Data Engineer

February 20, 2025
Expected expiry date: February 20, 2025
Industries: Mining, Petroleum, Natural Gas
Categories: Data Business Analyst
Toronto, ON • Full time

Start Date: ASAP

Hybrid Work Environment: 3 days in office, 2 days remote, with flexible hours

Dress Code: Business Casual

Location: Downtown Toronto, outside Union Station (TTC & GO accessible)

A Great Place to Work

Who We Are

Kinross is a Canadian-based global senior gold mining company with operations and projects in the United States, Brazil, Mauritania, Chile, and Canada. Our focus on delivering value is based on our four core values of Putting People First, Outstanding Corporate Citizenship, High Performance Culture, and Rigorous Financial Discipline. Kinross maintains listings on the Toronto Stock Exchange (symbol: K) and the New York Stock Exchange (symbol: KGC).

Mining responsibly is a priority for Kinross, and we foster a culture that makes responsible mining and operational success inseparable. In 2021, Kinross committed to a greenhouse gas reduction action plan as part of its Climate Change strategy, reached approximately 1 million beneficiaries through its community programs, and recycled 80% of the water used at our sites. We also achieved record high levels of local employment, with 99% of the total workforce hired from within host countries, and advanced our inclusion and diversity targets, including instituting a Global Inclusion and Diversity Leadership Council.

Eager to know more about us? Visit the Kinross Gold Corporation website.

Purpose of Role

Reporting to the Director of Development, Integration and Analytics, the incumbent will be a key member of the IT team, focusing primarily on data engineering, but also assisting with data architecture and management processes.

This person plays a critical role in enabling the organization to leverage data effectively for decision-making and strategic initiatives by ensuring the availability, reliability, and quality of data. In-depth knowledge of data processing, data modelling, and data products for integration and visualization is required.

Job Responsibilities

Data Engineering (70%)

  • Assist with the design and implementation of data pipelines to extract, transform, and load (ETL) data from various sources into storage systems (e.g., data warehouses, data lakes). This involves understanding the data sources, defining data extraction methods, and ensuring the integrity and quality of the data throughout the process (a sketch follows this list).
  • Integrate data from multiple sources and systems to create a unified view of the data landscape within the organization. This involves understanding data formats, protocols, and APIs to facilitate seamless data exchange and interoperability between systems.
  • Develop algorithms and scripts to clean, preprocess, and transform raw data into a format suitable for analysis and reporting. This may involve normalization, denormalization, aggregation, and other data manipulation techniques.
  • Contribute to the design of data models and schemas that represent the structure and relationships of the data stored in databases. This involves understanding business requirements and data usage patterns to design models that support efficient data access and analysis.
  • Conduct data quality checks and validation processes to ensure the accuracy, completeness, and consistency of the data. This includes identifying and resolving data anomalies, errors, and discrepancies to maintain data integrity and reliability.
  • Document data engineering processes, workflows, and systems to facilitate knowledge sharing and collaboration within the Data and Analytics team and across the organization. This includes creating documentation for data pipelines, data models, and infrastructure configurations.
  • Provide day-to-day support and technical expertise to both technical and non-technical teams.
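
As a concrete companion to the ETL and data quality bullets above, here is a minimal sketch of one pipeline stage in Python with pandas and SQLAlchemy. It is illustrative only: the source file, connection string, table, and column names are invented for the example and are not part of Kinross's actual stack.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical warehouse connection; a real pipeline would read this from a secret store.
engine = create_engine("postgresql://user:password@warehouse-host/analytics")

def extract(csv_path: str) -> pd.DataFrame:
    """Extract: read a raw export produced by a source system."""
    return pd.read_csv(csv_path, parse_dates=["order_date"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: clean and normalize raw rows for analysis."""
    df = df.dropna(subset=["order_id"])                      # drop rows missing the key
    df["customer"] = df["customer"].str.strip().str.title()  # normalize free-text names
    return df

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Quality checks: fail fast rather than load bad data."""
    if not df["order_id"].is_unique:
        raise ValueError("duplicate order_id values")
    if (df["amount"] < 0).any():
        raise ValueError("negative amounts found")
    return df

def load(df: pd.DataFrame) -> None:
    """Load: append the validated rows to the warehouse table."""
    df.to_sql("orders", engine, if_exists="append", index=False)

load(validate(transform(extract("orders.csv"))))
```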

Data Architecture and Data Management (20%)

  • Assist with management of databases and data storage solutions to store and organize structured and unstructured data effectively. This includes database design, configuration, optimization, and performance tuning to ensure efficient data retrieval and processing.
  • Assist with managing the infrastructure and resources required to support data processing and analysis, including servers, clusters, storage systems, and cloud services.
  • Collaborate with our Cybersecurity and Data Management teams to ensure data security, compliance, and governance standards are consistently met across the data platform, adhering to global data engineering standards and principles.
  • Assist with creating, documenting, and improving repeatable patterns for accessing specific data sources, such as a repeatable playbook for consuming third-party APIs (see the sketch after this list).
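
The "repeatable playbook" item above lends itself to a short illustration. The sketch below, in Python with the requests library, shows one common pattern for consuming a paginated third-party REST API: authenticate once, retry transient failures with backoff, and walk the pages. The bearer-token auth and the `results`/`next` response fields are assumptions made for the example.

```python
import time
import requests

def fetch_paginated(url: str, token: str, max_retries: int = 3) -> list[dict]:
    """Walk every page of a paginated REST API, retrying transient failures."""
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"  # assumed auth scheme
    records: list[dict] = []
    page = 1
    while True:
        for attempt in range(max_retries):
            resp = session.get(url, params={"page": page}, timeout=30)
            if resp.status_code not in (429, 503):  # retry only transient statuses
                break
            time.sleep(2 ** attempt)                # exponential backoff
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["results"])          # assumed response shape
        if not payload.get("next"):                 # assumed "more pages" flag
            return records
        page += 1
```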

Data Science (10%)

  • Contribute to data science initiatives by identifying opportunities, collecting and preparing data, and setting up data pipelines.
  • Prepare the exploratory data analyses, proof-of-concept modeling, and business cases necessary to build partner consensus and internal support for potential opportunities (a small example follows this list).
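
As a small, purely hypothetical example of that exploratory step, the sketch below profiles a dataset with pandas; the file and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical extract from a plant historian; file and column names are illustrative.
df = pd.read_csv("mill_throughput.csv", parse_dates=["timestamp"])

print(df.describe())                   # distributions of the numeric columns
print(df.isna().mean().sort_values())  # share of missing values per column

# A first proof-of-concept feature: daily processed tonnage.
daily = df.set_index("timestamp")["tonnes_processed"].resample("D").sum()
print(daily.head())
```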

Minimum Qualifications and Experience

  • A bachelor's degree in computer science, statistics, information management, or a related field; or an equivalent combination of training and experience.
  • At least two years of post-degree experience with the design, implementation, and operationalization of large-scale data and analytics solutions.
  • A strong background in technology, with experience in data engineering, data warehousing, and data product development.
  • Strong understanding of reporting and analytics tools and techniques.
  • Strong knowledge of data lifecycle management concepts.
  • Excellent documentation skills, including workflow documentation.
  • Ability to adapt to a fast-paced, dynamic work environment.

Required Technical Knowledge

  • Expertise designing and implementing ETL/ELT processes on cloud platforms, including data extraction from various sources, data transformation, and loading data into target systems such as data lakes or warehouses.
  • Solid foundation in SQL, with the ability to write and optimize queries to manipulate and analyze data efficiently (see the sketch after this list).
  • Strong experience with Python for data processing, analysis, and automation, with the ability to write efficient and scalable code.
  • Proficient in developing and executing notebooks using languages like Python, Scala, or SQL. Experience with notebook features such as interactive visualization, Markdown cells, and magic commands.
  • Experience with source code management tooling such as Git or Azure DevOps, as well as a strong understanding of deployment pipelines using services such as GitHub Actions, Azure DevOps, or Jenkins.
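
To illustrate the SQL expectation above, the sketch below uses Python's built-in sqlite3 module to show one general habit: filter on indexed columns and aggregate in the database, so only summarized rows reach the application. The table, index, and columns are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sensor (site_id INTEGER, ts TEXT, value REAL);
    CREATE INDEX idx_sensor_site_ts ON sensor (site_id, ts);
    INSERT INTO sensor VALUES (1, '2025-02-01', 4.2), (1, '2025-02-01', 5.0);
""")

# The WHERE clause matches the index, and the GROUP BY runs in the database.
query = """
    SELECT site_id, DATE(ts) AS day, AVG(value) AS avg_value
    FROM sensor
    WHERE site_id = ? AND ts >= ?
    GROUP BY site_id, DATE(ts)
"""
for row in conn.execute(query, (1, "2025-01-01")):
    print(row)  # (1, '2025-02-01', 4.6)
```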

Nice To Have

  • Solid understanding of the Databricks platform, including its core components, such as Databricks Runtime, Databricks Workspace, Catalog, and Databricks CLI. Comfortable navigating the Databricks environment and performing common tasks such as creating clusters, notebooks, and jobs.
  • Experience with the Spark DataFrame API, Spark SQL, and Spark MLlib for data processing, querying, and machine learning tasks. Able to write efficient Spark code to process large volumes of data in distributed environments (see the sketch after this list).
  • Familiarity with the integration between Databricks and other Azure services, such as Azure Blob Storage for data storage, Azure Data Lake Storage (ADLS) for data lakes, Azure SQL Database or Azure Synapse Analytics for data warehousing, Azure Key Vault for secrets management, and Azure Event Hubs.
  • Understanding of the basics of Azure services, including Azure Virtual Machines, Azure Storage (Blob Storage, Data Lake Storage), Azure Networking (Virtual Network, subnetting), Azure Identity and Access Management (Azure Active Directory, Role-Based Access Control), and Azure Resource Management.
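
For the Databricks and Spark items above, here is a minimal PySpark sketch of the DataFrame API; the input path, schema, and output location are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("assay-rollup").getOrCreate()

# Hypothetical assay results landed in a data lake; path and columns are illustrative.
df = spark.read.parquet("/mnt/raw/assays")

# Filtering and aggregation stay inside Spark, so the work is distributed.
summary = (
    df.filter(F.col("grade_gpt") > 0)  # drop invalid assays
      .groupBy("site_id")
      .agg(F.avg("grade_gpt").alias("avg_grade"),
           F.count("*").alias("n_samples"))
)
summary.write.mode("overwrite").parquet("/mnt/curated/assay_summary")
```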

Required Behavioral Competencies

  • Communication - demonstrated strength in communicating with internal and external customers, vendors, project teams and senior management. Strong ability to build relationships, work collaboratively, and resolve problems with people at all levels of the organization.
  • Flexibility - the ability to adapt to changing conditions and priorities, and to use feedback from the team and the broader organization to change course when necessary.
  • Innovation - willingness to embrace new, improved and unconventional ways to address business and technical issues and opportunities.
  • Accountability - taking ownership of the work at hand, with a willingness to take credit for successes and to accept and learn from failures.
  • Travel - Willingness to travel up to 10%.

Apply now!
