
Data Engineer
- Milano
- Permanent contract
- Full time
Responsibilities
- Data Pipeline Development – Design, build, and maintain scalable data pipelines using Fivetran for efficient data ingestion and dbt for transforming data within Redshift, ensuring data quality and reliability.
- Data Integration & Transfer – Develop and manage data integration workflows with Kinesis Data Streams, ensuring efficient and reliable real-time data transfer across various systems within our architecture.
- Data Warehousing – Build and maintain high-performance, reliable, and scalable data warehousing solutions in Redshift, utilizing Redshift Spectrum for seamless querying of data in S3 and Data Sharing for secure cross-account data collaboration.
- Data Modeling & Schema Design – Create and manage efficient data models and schemas using dbt to support comprehensive reporting, advanced analytics, and critical business intelligence needs across the organization.
- Reporting & Analytics Solutions – Develop and maintain robust reporting and analytics systems in Looker to enable intuitive data exploration, insightful visualization, and effective self-service business intelligence capabilities for stakeholders.
- Database Performance Optimization – Monitor and optimize Redshift database performance, ensuring data is readily accessible, delivered accurately, and available on time to meet the demanding needs of our data consumers.
- Cross-Functional Collaboration – Collaborate closely with business teams across various departments to thoroughly understand their data requirements and translate them into effective and scalable data engineering solutions that drive business value.
- Data Security & Compliance – Implement and maintain robust data security and privacy measures across all data pipelines and warehousing solutions to ensure strict compliance with regulations and protect the integrity of our valuable data assets.
- Documentation & Knowledge Sharing – Document data engineering workflows, implemented solutions, and best practices to support comprehensive knowledge sharing within the team and facilitate the efficient onboarding of new team members.
Requirements
- Relevant Education & Technical Background – You have a degree in Computer Science, Computer Engineering, or a related field, with at least 3 years of experience, ideally in designing and implementing complex data warehousing solutions.
- Programming Expertise – You are highly skilled in SQL for data manipulation and analysis and have practical experience with at least one of the following: Python, Java, or Scala for building robust data pipelines.
- Architectural Knowledge – You are familiar with both batch and real-time streaming data architectures and understand the principles behind designing scalable and resilient data systems.
- Data-Oriented Mindset – You have a strong passion for deeply understanding data: its meaning, the importance of preparing it correctly, and the methods for ensuring its quality and consistency.
- Structured Communication & Stakeholder Management – You communicate clearly and persuasively, adapting your approach to build trust and engage effectively with diverse stakeholders across technical and business teams.
- Language Proficiency – You are fluent in both Italian and English, enabling seamless communication within our international team and with stakeholders.
- Passion for FinTech or Startup Environments – You are genuinely passionate about the fast-paced and innovative FinTech or startup ecosystem and are eager to contribute to our mission.
Benefits
- Unlimited paid time off
- Psychological support & mental health webinars with Serenis
- Flexible hybrid working system
- Extended parental leave
- Childcare leave
- Professional development programmes
- Internal mobility program
- Language classes with Preply
- Internal workshops & training
- Stock Option Plan (with additional grants often provided based on performance)
- International relocation support
- Competitive salary
- Flexible Benefit budget
- Meal vouchers