Essentials (please DO NOT APPLY if you don't match all three):
- You should have EU or Swiss nationality, or a Swiss C permit.
- You should be in the top 5% of both your Bachelor's and Master's programs.
- You should upload your full transcripts.
Role
- Design, build, and maintain data pipelines for processing and storing datasets.
- Combine structured and unstructured data from diverse sources.
- Work closely with data scientists to create useful, consistent, and high-quality data.
- Automate deployment workflows of deep learning models using cloud platforms (AWS, Azure, GCP) or on-premise solutions.
- Optimize machine learning solutions for latency, throughput, and cost.
- Collaborate with deep learning engineers to integrate deep learning models into production environments.
Required qualifications and attitudes
- Bachelor's and Master's degrees (top 5%) in Computer Engineering or a related field.
- Willingness to continuously learn and to improve your skills and knowledge.
- Ability to interact with clients, data scientists, ML engineers, and DevOps experts.
Required technical skills
- Fluency in Python and strong programming capabilities.
- Experience with deep learning libraries (PyTorch/TensorFlow/ONNX) and NumPy.
- Understanding of relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB).
- Knowledge of containerization tools (e.g., Docker).
- Expertise in CI/CD tools (e.g., Jenkins, GitLab, CircleCI).
- Knowledge of microservices architecture and REST APIs.
- Fluency in Java or C/C++.
- Familiarity with data processing frameworks and cloud platforms.
- Previous related work experience.
We offer
- A stimulating scientific context and an informal environment.
- Training, tutoring, and close contact with leading-edge research and technology.
Seniority level
Entry level
Employment type
Full-time
Industries
IT Services and IT Consulting