Rust Job: Senior Data Engineer (remote)

Penguin Formula


Remote Position
(From Everywhere/No Office Location)

Rust Job Details

Company Description

We Cook iT is an international software house that delivers software development to its corporate customers by providing highly skilled, communicative IT professionals who build customized products through outsourcing, nearshoring and turn-key project solutions.

How do we differentiate ourselves? By investing in the professional growth and personal care of our software developers. We provide them with a premium service so that they can do the same for our customers.

Our talented team comprises software engineers and sales experts spread across offices in Europe and South America. We are a driven, go-getting company that aims to be an inspiring software house, and we know our future relies on IT.

We Cook iT stands for growth, support, dynamism, companionship and communication, and we’re looking for a [main characteristic] [position] to join team We Cook iT. Our head office is located in Portugal, in the heart of Lisbon (Avenida da Liberdade), and we work mainly for the European market.

Job Description
  • Expanding and optimizing our data and data pipeline architecture
  • Optimizing data flow and collection for cross-functional teams
  • Building data pipelines that process (transform, aggregate, wrangle) data
  • Optimizing data systems and building them from the ground up
  • Coaching junior engineers

Qualifications
  • Multi-year experience in a data engineer or software developer role, and/or demonstrable contributions to open-source projects.
  • Practical experience in at least 2 of the following programming languages: Python, Scala, Java, Kotlin, Go, Rust, C#, F#, C, C++.
  • Considerable experience with modern programming paradigms, e.g. functional, event-driven, asynchronous, component-oriented, …
  • Ability to sketch software and system architectures on the whiteboard.
  • Aware of and experienced in applying best practices for distributed systems.
  • Demonstrable expertise, from both a data engineering and a software engineering perspective, with components such as Spark, Spark Streaming, Kafka, Flink, ElasticSearch, Prometheus, Cassandra, Hive, Kudu, …
  • Experience building and maintaining ‘big data’ pipelines, architectures and data sets with tools like Airflow, Azure Data Factory, NiFi, Flume, Beam, etc.
  • Used to applying CI/CD and automation best practices with tools like Jenkins, GitHub Actions, ArgoCD, Ansible, Terraform, Helm, …
  • Willingness to take technical responsibility, initiative and leadership in projects, acting as lead data/software engineer.
  • Ability to collaborate closely with data/software architects: challenge their architecture and design drafts, refine them together, and bring them to production maturity.
  • Experienced in, and motivated by, working within open-source community contexts.

Bonus points for practical experience with any of the following:

  • Implementing libraries and frameworks for internal users (as opposed to applications only)
  • Writing Kubernetes (k8s) operators

Additional Information

If you are looking for a fast-paced multinational company and want to be part of exciting, state-of-the-art projects across Europe, send us your application in English.

We’re looking forward to hearing from you!