The world will be unrecognisable in 5 years.
Machine learning models are driving our cars, testing our eyesight, detecting our cancer, giving sight to the blind, giving speech to the mute, and dictating what we consume, enjoy, and think. These AI systems are already an integral part of our lives and will shape our future as a species.
Soon, we’ll conjure unlimited content: from never-ending TV series (where we’re the main character) to personalised tutors that are infinitely patient and leave no student behind. We’ll augment our memories with foundation models—individually tailored to us through RLHF and connected directly to our thoughts via Brain-Machine Interfaces—blurring the lines between organic and machine intelligence and ushering in the next generation of human development.
This future demands immense, globally accessible, uncensorable computational power. Gensyn is the machine learning compute protocol that turns compute into an always-on commodity resource—outside of centralised control and as ubiquitous as electricity—accelerating AI progress and ensuring that this revolutionary technology is accessible to all of humanity through a free market.
Our Principles:
AUTONOMY
- Don’t ask for permission – we have a constraint culture, not a permission culture.
- Claim ownership of any work stream and set its goals/deadlines, rather than waiting to be assigned work or relying on job specs.
- Push & pull context on your work rather than waiting for information from others and assuming people know what you’re doing.
- No middle managers – we don’t (and will likely never) have middle managers.
FOCUS
- Small team – misalignment and politics scale super-linearly with team size. Small protocol teams rival much larger traditional teams.
- Thin protocol – build and design thinly.
- Reject waste – guard the company’s time, rather than wasting it in meetings without clear purpose/focus, or bikeshedding.
REJECT MEDIOCRITY
- Give direct feedback to everyone immediately rather than avoiding unpopularity, expecting things to improve naturally, or avoiding short-term pain at the cost of extreme long-term pain.
- Embrace an extreme learning rate rather than assuming limits to your ability/knowledge.
Responsibilities
Train highly distributed models – over uniquely decentralised and heterogeneous infrastructure, rather than GPU clusters
Research novel model architectures – design, build, test, and iterate over completely new ways of building neural networks, with an eye towards achieving Byzantine fault tolerance in a trustless compute setting
Publish & collaborate – write research papers targeting top-tier AI conferences such as AAAI, ICML, IJCAI, and NeurIPS, and collaborate with experts from universities and research institutes
Engineering support – work with the engineering team on wider issues concerning ML (e.g. reproducible training)
Follow best practices – build in the open with a keen focus on designing, testing, and documenting your code
Write & engage – contribute to technical reports/papers describing the system and discuss with the community
Minimum requirements
✅ Extremely strong research background – with publications at major machine learning conferences (or commensurate industrial experience)
✅ Strong background in machine learning and distributed systems
✅ Hands-on experience with distributed model training
✅ Highly self-motivated with excellent verbal and written communication skills
✅ Comfortable working in an applied research environment – with extremely high autonomy and unpredictable timelines
Nice to haves
Communication backend experience – e.g. NCCL, Gloo, and MPI (a minimal sketch follows this list)
Experience training Large Language Models (LLMs)
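To illustrate what “communication backend experience” refers to, here is a minimal, hypothetical sketch (not part of the role description) of selecting a backend and running a collective operation with PyTorch’s `torch.distributed` API; it assumes the process is launched with `torchrun`, which supplies RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR, and MASTER_PORT through the environment.

```python
# Illustrative sketch: choosing a communication backend ("nccl", "gloo", "mpi")
# for PyTorch distributed training. Assumes launch via torchrun.
import os
import torch
import torch.distributed as dist

def init_distributed() -> str:
    # NCCL is the usual choice for GPU nodes; Gloo works on CPU-only or mixed
    # nodes; MPI requires a PyTorch build with MPI support.
    backend = "nccl" if torch.cuda.is_available() else "gloo"
    if backend == "nccl":
        # Pin each process to its own GPU on multi-GPU nodes.
        torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))
    # Default env:// init method reads rank/world size from the environment.
    dist.init_process_group(backend=backend)
    return backend

if __name__ == "__main__":
    backend = init_distributed()
    device = "cuda" if backend == "nccl" else "cpu"
    # All-reduce a tensor across workers as a smoke test of the backend.
    t = torch.ones(1, device=device)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {dist.get_rank()}: sum across workers = {t.item()}")
    dist.destroy_process_group()
```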
Compensation / Benefits:
Competitive salary + share of equity and token pool
Fully remote work – we hire between the West Coast (PT) and Central Europe (CET) time zones
4x all-expenses-paid company retreats around the world, per year
Whatever equipment you need
❤️ Paid sick leave
Private health, vision, and dental insurance – including spouse/dependents [ only]