PACCAR, Inc. is a Fortune 500 company established in 1905 and a recognized global leader in the commercial vehicle, finance, and customer service fields. PACCAR is a global technology leader in the design, manufacture, and customer support of high-quality light-, medium-, and heavy-duty trucks under the internationally recognized Kenworth, Peterbilt, and DAF nameplates. PACCAR also designs and manufactures advanced diesel engines and provides customized financial services, information technology, and truck parts related to its principal business.
PACCAR Financial (PFC) facilitates the sale of premium-quality PACCAR vehicles in 20 countries on three continents worldwide by offering a full spectrum of creative, flexible financial products and value-added services specifically tailored to the transportation industry.
Does empowering teams to make data-driven decisions excite you? Do you wake up in the morning wondering what possibilities could be unlocked with more data? PACCAR Financial is looking for a seasoned data engineer with AWS experience to join the team. Data Engineering makes fast, accurate, and reliable access to data possible. We build data pipelines, manage a data warehouse, and support the production use of our data. We advocate for good data practices and ensure that our business users can make sound data-driven decisions.
Job Functions / Responsibilities
- Work with business users and application developers/architects to understand data requirements, definitions, and business rules
- Create conceptual and logical data models that accurately reflect these requirements in a way easily understood by business users and development teams
- Work with development teams to create sound physical data designs that reflect the project architecture and choice of data/database technology
- Implement data structures on a variety of database platforms, including SQL Server, Oracle, Teradata and Snowflake
- Work with development teams to create database objects (views, functions, stored procedures) that improve application performance, functionality and scalability
- Build data pipelines (including data migration from legacy data sources, cleansing and transformation), data validation frameworks, and job schedules with emphasis on automation and scale
- Contribute to overall architecture, framework, and design patterns to store and process high data volumes
- Ensure product and technical features are delivered to spec and on time
- Design and implement features in collaboration with product owners, reporting analysts / data analysts, and business partners within an Agile / Scrum methodology
- Proactively support product health by building solutions that are automated, scalable, and sustainable; relentlessly focus on minimizing defects and technical debt
Qualifications and Skills
- 3-5 years of experience in large-scale software development (preferably Agile) with emphasis on data modeling and database development
- 3-5 years of experience with data modeling tools (Erwin, ER/Studio, PowerDesigner)
- 3-5 years of experience with relational DBMSs and SQL coding (SQL Server, Oracle, Teradata, Snowflake)
- Ability to communicate effectively (both orally and in writing) with business users, project team leaders and application developers
- Experience participating in Agile/Scrum projects in a highly collaborative, multi-discipline team environment
- Proficiency with ETL tools and techniques (SSIS, Attunity, Informatica)
- 2+ years of experience with AWS and related services (EC2, S3, DynamoDB, Elasticsearch, SQS, SNS, Lambda, Airflow, Snowflake, etc.)
- Experience with object-oriented and functional scripting languages (Python, Java, C++, Scala)
- Experience with R programming