The Data Platform Management team has been established within the Group IT team at Optus to help realise the vision of becoming a customer-centric organisation, driven by a data and analytics capability that enhances customer interactions and revenue generation.
The Big Data Engineer is responsible for the development and automation of Big Data ingestion, transformation and consumption services; adopting new technology; and ensuring modern operations in order to deliver consumer-driven Big Data solutions.
Key responsibilities include:

- Implement requests for the ingestion, creation, and preparation of data sources
- Develop and execute jobs to import data periodically or in (near) real-time from external sources
- Set up streaming data sources to ingest data into the platform
- Deliver the data sourcing approach and data sets for analysis, including data staging, ETL, data quality, and archiving
- Design solution architectures that meet business, technical, and user requirements
- Profile source data and validate that it is fit for purpose
- Work with the Delivery Lead and Solution Architect to agree on pragmatic means of data provision to support use cases
- Understand and document end-user usage models and requirements
We offer all kinds of benefits, such as:
- Onsite facilities at Macquarie Park, including a gym, GP, mini-mart, and cafes
- Training, mentoring, and further learning opportunities
- Staff buses to and from Epping and Wynyard
Preferred skills and experience include:
- Bachelor's degree in maths, statistics, computer science, information management, finance, or economics
- 8+ years' experience in data engineering and warehousing
- 3-5 years' experience integrating data into analytical platforms
- Experience with ingestion technologies (e.g. Sqoop, NiFi, Flume), processing technologies (Spark/Scala), and storage (e.g. HDFS, HBase, Hive)
- Experience in data profiling, source-to-target mappings, ETL development, SQL optimisation, testing, and implementation
- Expertise in streaming frameworks (Kafka, Spark Streaming, Storm) is essential
- Experience building microservices, REST APIs, and Data-as-a-Service architectures
- Experience managing structured and unstructured data types
- Experience in requirements engineering, solution architecture, design, and development/deployment
- Experience creating big data or analytics IT solutions
- Track record of implementing databases, data access middleware, and high-volume batch and (near) real-time processing
You would be a self-starter with the ability to work independently and multitask several different activities across critical deadlines in a high-pressure environment.