• Bachelor’s degree or higher in Computer Science or a related field.
• Good understanding of distributed computing and Big Data architectures.
• Passion for software engineering and craftsman-like attention to code quality.
• Proven experience developing Big Data solutions in the Hadoop ecosystem using Apache NiFi, Kafka, Flume, Sqoop, Apache Atlas, Hive, HDFS, HBase, and Spark (Hortonworks HDP and HDF preferred).
• Experience with at least one leading Change Data Capture (CDC) tool, such as Informatica PowerCenter.
• Development experience with at least one NoSQL database; HBase or Cassandra preferred.
• Polyglot development (4–5+ years): capable of developing in Java and Scala, with a good understanding of functional programming, SOLID principles, concurrency models, and modularization.
• DevOps: appreciates the CI/CD model and always builds with ease of consumption and monitoring of the system in mind. Experience with Maven (or Gradle or SBT) and Git preferred.
• Experience in Agile development, including Scrum and other lean techniques.
• Believes in the “you build it, you ship it, you run it” philosophy.
• Personal qualities such as creativity, tenacity, curiosity, and passion for deep technical excellence.
• Experience with Big Data migration/transformation programs in the Data Warehousing and/or Business Intelligence areas.
• Experience with ETL tools such as Talend, Pentaho, and Attunity.
• Knowledge of data warehouse platforms such as Teradata and Netezza.
• Good grounding in NoSQL data stores such as Cassandra and Neo4j.
• Strong knowledge of computer algorithms.
• Experience with workload orchestration and automation tools such as Oozie and Control-M.
• Experience in building self-contained applications using Docker, Vagrant, or Chef.