- Design and build data structures on MPP platforms such as AWS Redshift or Druid.
- Design and build highly scalable data pipelines using AWS tools such as Glue (Spark-based), Data Pipeline, and Lambda.
- Translate complex business requirements into scalable technical solutions.
- Develop a strong understanding of the business's analytics needs.
- Collaborate with the team on building dashboards and performing data analysis to support the business, using self-service tools such as Apache Superset or Tableau.
- Collaborate with multiple cross-functional teams on solutions that have a broader impact on Xapo's business.
- In-depth understanding of data structures and algorithms.
- Experience in designing and building dimensional data models to improve accessibility, efficiency, and quality of data.
- Experience in designing and developing ETL data pipelines.
- Proficiency in writing advanced SQL and expertise in SQL performance tuning.
- Programming experience building high-quality software; Python or Scala skills preferred.
- Strong analytical and communication skills.
NICE TO HAVE SKILLS
- Work/project experience with big data and advanced programming languages.
- Experience using Java, Spark, Hive, Oozie, Kafka, and MapReduce.
- Work experience with AWS data-processing tools (Glue, Data Pipeline, Kinesis, Lambda, etc.).
- Experience with or advanced courses on data science and machine learning.
- A dedicated workspace.
- A reliable internet connection with the fastest speed possible in your area.
- Devices and other essential equipment that meet minimal technical specifications.
- Alignment with Our Values.
- Shape the Future: Improve lives through cutting-edge technology; work remotely from anywhere in the world.
- Own Your Success: Receive attractive remuneration, enjoy an autonomous work culture and flexible hours, and apply your expertise to meaningful work every day.
- Expect Excellence: Collaborate, learn, and grow with a high-performance team.