TheMathCompany is hiring Data Engineers through HackerEarth as part of its 2020 off-campus drive. Candidates should have 2-6 years of experience and a BE/BS/MTech/MS in computer science or equivalent work experience. Interested candidates can apply for this recruitment drive and register through the given link. The detailed eligibility criteria and the application process are given below.
Here you will find the details of this TheMathCompany off-campus opening, such as the company name, job location, job role or profile, required experience, year of passing, qualification or eligibility criteria, and more.
Job Summary of TheMathCompany Careers:
Company Name: TheMathCompany
Qualification: BE/BS/MTech/MS in computer science or equivalent work experience.
Job Profile: Data Engineer
Location: Bangalore
Job Description of TheMathCompany Data Engineer hiring:
TheMathCompany is a modern, hybrid analytics firm that builds contextual AI assets for Fortune 500 & equivalent companies. Our well-rounded consulting model addresses pressing gaps that exist in conventional analytics service provider models and off-the-shelf products. We offer the collective advantages of customization, diverse problem-solving capabilities, speed of delivery, reusability, and scalability – all powered by our proprietary AI master engine, Co.dx.
TheMathCompany is a four-year-old firm that helps organizations with their analytics transformation journey. We are close to 350 employees, including Data Scientists, Data Engineers, and Visualization experts.
We were founded in 2016 by industry leaders, and MathCo. has grown organically since day one. We work with over 30 Fortune 500 clients from industries like CPG, Retail, Insurance, and Banking, across the US, UK, Singapore, Hong Kong, Europe, and the Middle East, and have been recognized as one of the fastest-growing start-ups in the world.
We are currently looking for Associate/Senior Associate – Data Engineering to join our passionate team in our Bangalore office.
Eligibility requirements:
- 2-6 years of experience
- Knowledge of Python, Java, Scala, or other programming languages, along with Hadoop, Hive, Spark, SQL, big data and cloud platforms, data warehousing, databases, etc.
Challenge format:
- 20 MCQs on relevant topics
- 2 programming questions open to all languages (Brownie points for coding in Python, Java, or Scala)
What does the interview process look like at TheMathCompany?
Our hiring process is not designed to find an exact profile but to identify folks who have the right attitude, a learning/growth mindset, and a good culture fit. The interview rounds are twofold:
- Case Study / Technical Round
- Culture Fit Round
For Data Engineer-Associate:
Experience Required: 2-4 Years
CTC: INR 7 L to INR 14 L
Job Location: Bangalore
Job Openings: 10
WHAT’S IN IT FOR YOU:
- An exciting opportunity to be a part of the growth journey of one of the fastest-growing AI & ML firms – scope for experimentation, the big & small victories, the learnings, and everything in between
- Our in-house learning and development cell, Co.ach, run by world-class data analytics experts, enables our folks to stay up to date with the latest trends and technologies
- At TheMathCompany, we insist on a culture that provides us all with enough flexibility to accommodate our personal lives without compromising on the dream of building a great company
- We are changing the way companies go about executing enterprise-wide data engineering and data science initiatives, and we’d love to have you grow with us on this journey
ROLE DESCRIPTION:
As a data engineer, you’ll have an opportunity to work on the universe of data and solve some very interesting problems by creating and maintaining scalable data pipelines dealing with petabytes of data. All our projects entail working on cutting edge technologies, petabyte-scale data processing systems, data warehouses and data lakes to help manage the ever-growing information needs of our customers.
The responsibilities are detailed below:
- Build & maintain data pipelines to support large scale data management in alignment with data strategy and data processing standards
- Experience in designing efficient and robust ETL workflows
- Experience in Database programming using multiple flavors of SQL
- Deploy scalable data pipelines for analytical needs
- Experience in Big Data ecosystem – on-prem (Hortonworks/MapR) or Cloud (Dataproc/EMR/HDInsight)
- Worked on query languages/tools such as Hadoop, Pig, SQL, Hive, Sqoop, and SparkSQL (a minimal pipeline sketch follows this list)
- Experience in any orchestration tool such as Airflow/Oozie for scheduling pipelines
- Scheduling and monitoring of Hadoop, Hive and Spark jobs
- Basic experience in cloud environments (AWS, Azure, GCP)
- Understanding of in-memory distributed computing frameworks like Spark (and/or Databricks), including parameter tuning and writing optimized queries in Spark
- Experience in using Spark Streaming, Kafka and HBase
- Experience working in an Agile/Scrum development process
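To give a flavour of the pipeline work described above, here is a minimal PySpark/SparkSQL batch ETL sketch. It assumes a hypothetical raw events dataset; the storage paths, view name, and columns (event_id, user_id, event_ts) are placeholders for illustration only and do not reflect TheMathCompany's actual stack.

```python
# Minimal batch ETL sketch with PySpark and Spark SQL (hypothetical example):
# read raw CSV, deduplicate and clean via a SQL query, write partitioned Parquet.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily_events_etl").getOrCreate()

# Extract: load raw events (schema inferred here for brevity; assumed path).
raw = spark.read.csv("s3a://raw-bucket/events/", header=True, inferSchema=True)
raw.createOrReplaceTempView("raw_events")

# Transform: deduplicate and keep only valid rows using Spark SQL.
clean = spark.sql("""
    SELECT DISTINCT
        event_id,
        user_id,
        CAST(event_ts AS TIMESTAMP) AS event_ts,
        to_date(event_ts)           AS event_date
    FROM raw_events
    WHERE event_id IS NOT NULL
""")

# Load: write partitioned Parquet for downstream analytical queries.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://curated-bucket/events/"))

spark.stop()
```

In practice a job like this would typically be scheduled and monitored by an orchestration tool such as Airflow or Oozie, as the responsibilities above mention.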
REQUIRED QUALIFICATIONS:
We are looking for individuals who are curious, excited about learning and navigating through the uncertainties and complexities that are associated with growing a company. Some qualifications that we think would help you thrive in this role are:
- BE/BS/MTech/MS in computer science or equivalent work experience
- 2 to 4 years of experience in building data processing applications using Hadoop, Spark, NoSQL databases, and Hadoop streaming
PREFERRED QUALIFICATIONS:
- Exposure to the latest cloud ETL tools such as Glue/ADF/Dataflow is a plus
- Expertise in data structures, distributed computing, manipulating and analyzing complex high-volume data from a variety of internal and external sources
- Experience in building structured and unstructured data pipelines
- Proficient in a programming language such as Python/Scala
- Good understanding of data analysis techniques
- Solid hands-on working knowledge of SQL and scripting
- Good understanding of relational/dimensional modeling and ETL concepts
- Understanding of reporting tools such as Tableau, QlikView, or Power BI
For Data Engineer-Senior Associate:
Experience Required: 4-6 Years
CTC: INR 10 L to INR 18 L
Job Location: Bangalore
Job Openings: 10
ROLE DESCRIPTION:
As a data engineer, you’ll have an opportunity to work on the universe of data and solve some very interesting problems by creating and maintaining scalable data pipelines dealing with petabytes of data. All our projects entail working on cutting edge technologies, petabyte-scale data processing systems, data warehouses and data lakes to help manage the ever-growing information needs of our customers.
The responsibilities are detailed below:
- Experience in understanding and translating data and analytics requirements and functional needs into technical requirements while working with global customers
- Build and maintain data pipelines to support large scale data management in alignment with data strategy and data processing standards
- Experience in Database programming using multiple flavors of SQL
- Deploy scalable data pipelines for analytical needs
- Experience in Big Data ecosystem – on-prem (Hortonworks/MapR) or Cloud (Dataproc/EMR/HDInsight)
- Worked on query languages/tools such as Hadoop, Pig, SQL, Hive, Sqoop, and SparkSQL
- Experience in any orchestration tool such as Airflow/Oozie for scheduling pipelines
- Exposure to the latest cloud ETL tools such as Glue/ADF/Dataflow
- Understand and use in-memory distributed computing frameworks like Spark (and/or Databricks), including parameter tuning and writing optimized queries in Spark
- Hands-on experience in using Spark Streaming, Kafka and HBase (a brief streaming sketch follows this list)
- Experience working in an Agile/Scrum development process
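Since the senior role calls out Spark Streaming, Kafka, and HBase, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and appends parsed events to Parquet. The broker address, topic name, schema, and output paths are assumed placeholders, and the Kafka connector package (spark-sql-kafka) must be on the Spark classpath; this illustrates the consume-parse-persist pattern rather than the company's actual pipelines.

```python
# Minimal Spark Structured Streaming sketch (hypothetical example):
# consume JSON events from a Kafka topic and append them to Parquet.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Assumed JSON payload of each Kafka message.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read from Kafka; the value column arrives as bytes and is parsed as JSON.
events = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
               .option("subscribe", "events")                     # placeholder topic
               .load()
               .select(from_json(col("value").cast("string"), schema).alias("e"))
               .select("e.*"))

# Append micro-batches to Parquet with checkpointing for fault tolerance.
query = (events.writeStream
               .format("parquet")
               .option("path", "s3a://curated-bucket/events_stream/")
               .option("checkpointLocation", "s3a://checkpoints/events_stream/")
               .outputMode("append")
               .start())

query.awaitTermination()
```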
REQUIRED QUALIFICATIONS:
We are looking for individuals who are curious, excited about learning and navigating through the uncertainties and complexities that are associated with growing a company. Some qualifications that we think would help you thrive in this role are:
- BE/BS/MTech/MS in computer science or equivalent work experience.
- 4 to 6 years of experience in building data processing applications using Hadoop, Spark, NoSQL databases, and Hadoop streaming
PREFERRED QUALIFICATIONS:
- Expertise in data structures, distributed computing, manipulating and analyzing complex high-volume data from a variety of internal and external sources
- Experience in developing ETL designs and data models for structured/unstructured and streaming data sources
- Experience in building large scale data pipelines in batch and real-time mode
- Experience in data migration to cloud (AWS/GCP/Azure)
- Proficient in a programming language such as Python/Scala
- Good understanding of relational/dimensional modeling and ETL concepts
- Good understanding of data analysis techniques
- Solid working knowledge of SQL and scripting
- Understanding of reporting tools such as Tableau, QlikView, or Power BI
How to apply for the TheMathCompany Recruitment Drive 2020 for the role of Data Engineer?
Apply link for TheMathCompany Jobs 2020 & more details: Click Here
This notification is for the recruitment of Data Engineers at TheMathCompany. You can apply for the Data Engineer profile through the given link.
Important Note: Candidates must read all the instructions and requirements carefully while applying for the job. Fill in all the required fields; all communication from the company will be sent to your registered email ID. Keep checking your email for the next round once your resume is shortlisted.
Seekajob is a job-sharing platform for all job seekers. We do not charge any cost or service fee for any job posted on our website, nor have we authorized anyone to do so. We provide job links from the careers pages of the organizations. Applicants are advised to check all the details when applying for a job to avoid any inconvenience.