
Kroll, LLC Senior Data Engineer - 21009870 in Atlanta, Georgia

In a world of disruption and increasingly complex business challenges, our professionals bring truth into focus with the Kroll Lens. Our sharp analytical skills, paired with the latest technology, allow us to give our clients clarity, not just answers, in all areas of business. We value the diverse backgrounds and perspectives that enable us to think globally. As part of One team, One Kroll, you'll contribute to a supportive and collaborative work environment that empowers you to excel.

At Kroll, your work will help deliver clarity to our clients' most complex governance, risk and transparency challenges. Apply now to join One team, One Kroll.

RESPONSIBILITIES:
- Participate in Scrum teams of data and analytics engineers who build, manage, and support enterprise data and analytics technology infrastructure, tools, and products.
- Accelerate the build-out and improvement of data engineering capabilities, with a heavy focus on foundational data assets, data engineering pipelines, data platform initiatives, and data product development.
- Take accountability for delivering business unit and Internal Firm Services data that will be made available through the Kroll Connected Ecosystem.
- Build cross-functional relationships with business product owners, data engineers, data scientists, and analysts to understand product needs and delivery expectations.
- Guide the adoption of new products, platforms, and data assets to improve data-informed operations across the organization.
- Build and grow data engineering capabilities that deliver performant solutions driving customer value and business outcomes.
- Establish a product mindset and cross-functional team structure to support the strategy and fast-paced delivery of quality solutions.
- Understand the cloud ecosystem, markets, competition, and user requirements in depth.
- Help launch new products and features, test their performance, and iterate quickly.
- Build scalable, fault-tolerant batch and real-time data pipelines to validate, extract, transform, and integrate large-scale datasets, including location and time series data, across multiple platforms (see the sketch after this list).
- Optimize and expand Kroll's data platform, which spans multiple business units, cloud providers, services, data warehouses, and applications.
- Help establish an enterprise-level data strategy covering governance, data architecture, big data analytics, delivery leadership, and automation.
- Lead cross-functional teams across the globe.
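The responsibilities above center on building batch and real-time pipelines with Databricks and PySpark. As a hedged illustration of that kind of batch ELT work (not Kroll's actual code), here is a minimal PySpark sketch; the paths, column names, and Delta target are hypothetical.

```python
# Illustrative sketch only: a minimal batch ELT step in PySpark.
# All paths, column names, and the Delta target are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-batch-pipeline").getOrCreate()

# Extract: read raw transaction events from a hypothetical landing zone.
raw = spark.read.json("/mnt/landing/transactions/")

# Validate: drop records missing required fields and non-positive amounts.
clean = (
    raw.dropna(subset=["transaction_id", "event_time", "amount"])
       .filter(F.col("amount") > 0)
)

# Transform: standardize types and add a load date for partitioning.
transformed = (
    clean.withColumn("event_time", F.to_timestamp("event_time"))
         .withColumn("load_date", F.current_date())
)

# Load: append to a partitioned Delta table (Delta is native on Databricks).
(
    transformed.write.format("delta")
    .mode("append")
    .partitionBy("load_date")
    .save("/mnt/curated/transactions/")
)
```

On Databricks, a step like this would typically be scheduled as a workflow task or orchestrated from Azure Data Factory, as the requirements below suggest.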

REQUIREMENTS:
- Bachelor's degree and a minimum of 3 years of overall experience, with hands-on experience setting up enterprise-level data lakes on any big data platform (Azure Data Lake and/or Databricks a plus).
- Able to understand and guide teams in implementing data lake and ELT concepts in the cloud using Databricks, Azure Data Factory, Python, C#, GraphQL, PySpark, Pandas, etc.
- Expertise with Azure and Databricks across the data lifecycle and AI domain: data migration, data transformation, data modernization, modern data warehousing, analytics, Azure ML, etc.
- A deep understanding of data architecture, data security, and modern processing techniques using data pipelines.
- Must be able to interpret and analyze large sets of data for complex business situations and understand the implications for the team.
- Experience with key platform technologies, including APIs and API management, platform services, streaming systems, stream processing, and persistent storage for analytics and applications at the enterprise level.
- Practical experience deploying applications and implementing continuous-integration tools and patterns.
- Prior experience with analytics and BI tools such as Qlik and Power BI for reporting, and with streaming or messaging technologies such as Kafka, Amazon Kinesis, or SNS (see the streaming sketch after this list).
- Relevant experience guiding teams on DevOps and on analyzing application and cloud environment performance.
- Relevant experience leveraging scientific methods, processes, algorithms, and systems to discover business insights from structured and unstructured financial datasets is a huge plus.
- Quick to understand business needs and learn domain-specific knowledge.
- Familiarity with financial statements and valuation methodologies/metrics preferred.
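The requirements mention stream processing and messaging technologies such as Kafka alongside Databricks. As a hedged sketch of that kind of workload (assuming a reachable Kafka broker, a JSON payload matching the hypothetical schema below, and the Kafka connector that ships with Databricks runtimes), a minimal Structured Streaming job could look like this:

```python
# Illustrative sketch only: read a Kafka topic and append parsed events to a
# Delta table. Broker address, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType,
)

spark = SparkSession.builder.appName("example-streaming-pipeline").getOrCreate()

event_schema = StructType([
    StructField("transaction_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Extract: subscribe to a hypothetical Kafka topic.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "transactions")
         .load()
)

# Transform: Kafka delivers bytes; decode the value and parse the JSON payload.
events = (
    stream.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*")
)

# Load: continuously append to a Delta table; checkpointing provides fault tolerance.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/transactions/")
          .outputMode("append")
          .start("/mnt/curated/transactions_stream/")
)
query.awaitTermination()
```

The checkpoint location is what lets the stream restart after a failure without losing or duplicating committed micro-batches, which is the fault-tolerance property the responsibilities call for.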

DESIRED SKILLS:
- Experience working as a Data Engineer with ETL/ELT using various cloud technologies.
- Experience with Python/PySpark and Scala.
- Experience with the MS Office suite, including Python/Copilot integrations.
- Experience designing ER diagrams and architecting databases.
- Experience with relational and non-relational databases (Oracle, SQL, PostgreSQL, Cosmos DB, DynamoDB).
- Comfortable with various flavors of SQL.
- Must have used the Databricks platform in a Lakehouse implementation.
- Knowledge of Python libraries such as Pandas, NumPy, spaCy, or NLTK (see the profiling sketch after this list).
- Relevant experience using Jenkins or Synapse for workflow scheduling.
- Prior experience with CloudFormation and/or Terraform for code deployment and integration.
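The desired skills list Pandas and NumPy among the Python libraries. The short sketch below is a hypothetical validation-and-profiling step using those two libraries; the file name and columns are invented for illustration.

```python
# Illustrative sketch only: basic validation and profiling with Pandas/NumPy.
# The CSV file and its columns are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("transactions_sample.csv", parse_dates=["event_time"])

# Validation: count missing values per column, then keep positive amounts only.
missing_per_column = df.isna().sum()
df = df[df["amount"] > 0]

# Profiling: summary statistics plus a log-scaled amount for skewed distributions.
df["log_amount"] = np.log1p(df["amount"])
summary = df["amount"].describe()

print(missing_per_column)
print(summary)
```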

To be considered for a position, you must formally apply via careers.kroll.com.

Kroll is committed to creating an inclusive work environment. We are proud to be an equal opportunity employer and will consider all qualified applicants regardless of gender, gender identity, race, religion, color, nationality, ethnic origin, sexual orientation, marital status, veteran status, age or disability.

The current salary range for this position is $60,000 - $150,000.

#LI-CN1

#LI-Hybrid
