Do you want to use the skills and knowledge you have built as a Principal Data Engineer to actively make a difference in people's lives?
This company is truly at the forefront of Artificial Intelligence within its industry and the first to design a life-changing product driven solely by AI. A technology company, they pair the latest science and engineering with knowledge acquired from over a decade of research to develop novel Machine Learning methods.
They are currently seeking a passionate and driven Principal Data Engineer with a solid background in software development as well as experience in the design and implementation of automated data pipelines. As a Principal Data Engineer you will be an integral member of a world-class team of engineers, researchers and scientists focused on applying AI to achieve unprecedented output gains within this life-changing industry.
You will be based in the brand-new Oxford tech office, working as part of an agile team on greenfield projects, which means you will have direct input into which modern tools and cloud-based technologies are used. You will work closely with Data Scientists, Software Engineers and Research Engineers to develop multiple data pipelines that extract, transform and load datasets for use in training Machine Learning models.
Your experience working with database systems, designing data models and optimising data flow will allow you to make a real impact and strongly influence the overall design of the data architecture.
Your responsibilities will include:
- Designing and implementing automated data processing pipelines and data models.
- Delivering extensible and maintainable software.
- Owning the full software project lifecycle, from inception through to execution, without the need for hand-holding.
- Continuously developing yourself, proactively building the knowledge needed to carry out data engineering tasks involving complex datasets.
- Mentoring more junior members of the team where required.
- Staying up to date with new technology developments, understanding their potential benefits to the platform and creating plans for their implementation.
The successful candidate will have:
- A PhD in Computer Science, Mathematics, Physics, Engineering or a relevant field (preferred, though not compulsory).
- Strong programming skills in Python; familiarity with NumPy and pandas is desirable.
- Exceptional knowledge of database systems, both relational and non-relational, including SQL syntax, schema design and query optimisation.
- A proven track record of developing and implementing data pipelines at scale using cloud-based technologies.
- Excellent communication skills; organised, motivated and driven by technological advances.
- A strong team player with an inclusive mindset, willing to listen to and learn from others.
- A passion for changing people's lives through the use of advanced technology.
- Experience with cloud infrastructure platforms, e.g. AWS.
- Experience with workflow orchestration systems such as Airflow or Luigi.
- Experience with containerisation and orchestration tools such as Docker and Kubernetes.
- Experience with continuous integration and deployment tools such as Jenkins and Travis CI.
- Experience with machine learning frameworks such as scikit-learn and PyTorch.
- Experience using data structures to solve real-world problems.
- Experience with other programming languages such as Java or Scala.
- Multiple open-source contributions demonstrating experience in software development.
Does this sound like a challenge you would like to get involved with?
If so, contact me at Logikk on 020 3005 4968 or [email protected]