Senior Data Infrastructure Engineer
Are you passionate about how technology can make a real impact in cancer care? Join us at kaiko.ai in building a state-of-the-art Data & AI platform, enabling large-scale training of multi-modal foundation models and transforming the clinical workflow to deliver better patient outcomes.
Our culture
At Kaiko, we have an open, creative and non-hierarchical work atmosphere which offers continuous learning and direct impact in return for accountability and team spirit.
We offer flexibility, for instance through remote working, alongside the expectation that you manage and deliver your own goals; our team's ownership, passion and shared commitment to improving health outcomes through data is something that sets us apart.
At the intersection of healthcare and data we recognize the implications for wellbeing and trust, and we approach our work with the utmost sensitivity. Data privacy, compliance and security are core to everything we do. Our open, creative environment gives talented people room to explore new ideas, and we reward this with an attractive package and opportunities for further personal development.
About the role
We are seeking a highly skilled Senior Data Infrastructure Engineer with a passion for building scalable data platforms and ensuring a high-availability experience that empowers our AI research team in their daily work. You'll play a vital role in making our ambitious AI healthcare solutions a practical reality. This exciting role will be based in either The Netherlands or Switzerland.
Your responsibilities
- Design, build, and maintain data infrastructure systems such as distributed computing, data orchestration, distributed storage, and streaming infrastructure, while ensuring scalability, reliability, and security;
- Collaborate across domain teams with researchers, product teams and other stakeholders to support their data infrastructure needs;
- Ensure our data platform can scale reliably to the next several orders of magnitude;
- Engineer excellent and reliable data tooling and systems that enable collaboration both internally and with external parties.
Qualifications/requirements:
- 4+ years of experience in building and maintaining large-scale data infrastructure, with a focus on machine learning, data pipelines, orchestration, batch and real-time streaming data;
- Experience building and launching projects in a production software environment;
- Extensive experience building and managing various database technologies, both relational (e.g. MySQL, PostgreSQL) and NoSQL (e.g. MongoDB, Cassandra, or DynamoDB). Proficiency in database design, sharding, replication, and tuning for high-traffic environments;
- Hands-on experience building and managing at least one other data storage technology such as:
- cache (e.g. Redis);
- queue (e.g. RabbitMQ, Kafka);
- object storage (e.g. MinIO, AWS S3, Azure Blob Storage);
- vector database;
- distributed search (e.g. Elasticsearch);
- Experience with at least one cloud platform (e.g. AWS, Azure or Google Cloud);
- Experience with containerization (e.g. Docker) and orchestration tools (e.g. Kubernetes, Helm, Kustomize);
- Strong coding skills in at least one programming language (e.g. Python, Scala, Java, C++);
- Excellent analytical and problem-solving skills, with a knack for identifying and addressing bottlenecks, ensuring that backend systems perform optimally under varying loads;
- Self-motivated and able to work well in a fast-paced startup environment.
Nice to have:
- Experience in AI/ML environment;
- Experience with data backup strategies, disaster recovery, and multi-tenant data infrastructure;
- Track record of engineering large-scale data infrastructure that processes and serves petabytes of data;
- Experience with CI/CD tools (e.g. GitLab CI/CD, Github Actions or CircleCI);
- Knowledge of monitoring, logging, alerting and observability tools (e.g. Prometheus, Grafana, ELK Stack or Datadog);
- Familiarity with infrastructure-as-code tools (e.g. Terraform, CloudFormation or Pulumi);
- Strong understanding of networking, security, and system administration concepts;
- You have personally caused an org wide data incident, survived it, and live to tell the tale.
This Senior Data Infrastructure Engineer position is a full-time role. Applicants must be resident in The Netherlands or Switzerland, hold a valid work permit and preferably be within commuting distance of our offices in Amsterdam or Zürich. Given the nature of Kaiko's business and the sensitive data it handles, a Certificate of Conduct will be required upon finalizing the employment contract.
Our offer
- An inspirational, extremely talented and internationally diverse team which ‘builds a submarine whilst also operating it’ and loves doing that
- A unique working experience in a fast-growing company which intends to revolutionize healthcare
- Autonomy, flexibility and the opportunity to do your work in the way that works best for you, as long as you deliver on your responsibilities
- An attractive and competitive salary, a good pension plan and 25 vacation days per year
- We value your personal & professional development; together we decide how we can support your growth.
Want to join Kaiko as a Senior Data Infrastructure Engineer? Send us your application online and we will contact you as soon as possible. At Kaiko we welcome everyone with equal enthusiasm. Should you have any questions, please do not hesitate to contact us at recruitment@kaiko.ai or visit Kaiko online at www.kaiko.ai
- Department: Platform Engineering
- Locations: Amsterdam (NKI-AvL), Zürich (Puls 5)
- Remote status: Hybrid Remote