Data Engineer (Azure/Python/Databricks) - NL/FR + ENG

Brussels, Belgium

Alter Solutions

IT and Cybersecurity consulting services in Europe, America and Africa. Discover our expertise in Software development and Cloud computing.


Company Description

Alter Solutions Benelux is an IT consultancy company and promoter of Digital Transformation, part of the Alter Solutions Group, created in 2006 in Paris. In 2022, Alter Solutions joined the act digital group, forming a global community of talent in Technology with a presence in twelve countries: Germany, Belgium, Brazil, Canada, the United States of America, Morocco, Spain, France, Luxembourg, Poland, Portugal and Serbia. In 2022, we were also certified as a Great Place to Work®. In the Benelux region, we partner with over 20 clients and a team of over 50 people, working on projects for industries as diverse as banking, insurance, transportation, aviation, energy, and telecom. Learn more about Life at Alter: https://www.linkedin.com/company/alter-solutions-group/life/altersolutionsgroup

Job Description

Service description:

We are looking for a Data Engineer who will primarily join one of our product teams. It is an opportunity to leave your mark on our corporate Data organization and to contribute, indirectly, to better products for managing our customers, such as higher-quality data or data services for our customers.

We are looking for Data Engineers able to create data pipelines, efficient storage structures, powerful materialized views in different analytical technologies, and also data exchange endpoints for our users. To some extent, you will also interact with governance tools (glossary, modeling, lineage, data quality, etc.).

For this mission, you will need extensive hands-on experience (more than 3 years, preferably 5) in 2 out of the 4 following categories:

  • Implementing solutions with the on-premises Microsoft data technology stack (SSIS, SSAS, and SQL Server)
  • Implementing data pipelines using Azure services such as Azure Databricks, and also, to some extent, Azure Data Factory, Azure Functions, Azure Stream/Log Analytics, and Azure DevOps
  • Implementing data pipelines or data enrichments with Python in a Databricks environment
  • Open to learning and taking on new technologies (among those listed above, or others such as Redis, RabbitMQ, Neo4j, or Apache Arrow)
  • Able and willing to interact with business analysts and stakeholders to refine requirements and to present reusable, integrated solutions
  • Able and willing to contribute to extensive testing of the solution, as well as to the reinforcement of DevOps principles within the team
  • Able and willing to contribute to the writing and structuring of documentation

What you will NOT be doing 

  • You will not operate from a technical ivory tower.
  • You will not make decisions for the Business, but rather advise them with careful logic and by showing your reasoning.
  • You will not deliver technical solutions for their own sake, but rather product features.

What we’re expecting from you in your product team 

  • Participate in the refinement of user stories and share your point of view with the product team (challenge propositions and propose alternatives).
  • Define the technical design (or architecture): collaborate with teammates, ask for reviews and advice, and write down the initial version. Openly discuss any evolution of the design/architecture.
  • Implement the feature/user story in parallel with its unit tests.
  • Implement the corresponding monitoring and metadata.
  • Participate in the definition of end-to-end test scenarios.
  • Peer review code written by teammates and ask for peer review of your own code. Accept feedback from peers.
  • Participate in end-user documentation, demos and roadshows.
  • Be engaged in retrospectives and continuous improvement activities.
  • Actively give feedback to your teammates about their contributions.

What we’re expecting from you in the Data Chapter 

  • Review some architectures or designs proposed by other teams and share your experience and knowledge.
  • Write down decisions taken in a format promoting reusability.
  • Contribute to design and architectural blueprints combining previous decisions.
  • Participate in learning sessions, either by presenting on a topic or as an attendee.

Requirements

Qualifications

As a candidate for this mission, you should also be able to demonstrate the following non-technical skills:

• Able to challenge your interlocutors by leveraging rational thinking and a no-nonsense philosophy.
• Continuously look at data in a transversal way (no silos), across the entire enterprise, to maximize coherence, reuse and adoption.
• Analytical approach to problem-solving and a track record of driving results through continuous improvement.
• Customer-oriented.
• Iterative thinking.
• Team player embodying respect, open-mindedness, daring, challenge, innovation, and one team/one voice for the Customer.
• Product-oriented mindset.
• Enjoy sharing ideas with the team (or with other teams) and contributing to the product's success.
• Language skills: fluent in English (must have) and fluent in German, French or Dutch (soft requirement).
• Good communication skills.

    Additional Information

• Work regime: full-time
• Location: 2 days/week in Brussels and 3 days/week remote (after training)

     
