Nitya Software Solutions

AI Consultant

Hammond, Indiana | Contract
$70+ hourly
About the Job
Position Overview:

The ideal candidate is highly self-driven, requires minimal oversight, and has a proven track record of translating business requirements into production-grade cloud analytics solutions. This is a hands-on technical role - you will design, build, and deliver, not just advise.

Key Responsibilities:

The contractor will own end-to-end delivery across the following areas:

Data Analytics Execution:
  • Design, develop, and deploy cloud-native analytics solutions supporting NALO operational KPIs (order fulfillment, inventory accuracy, throughput, No-Touch Order rates); a sketch of one such KPI calculation follows this list.
  • Build and maintain self-service dashboards and reporting layers for warehouse, logistics, and leadership stakeholders.
  • Translate business and operational data questions into structured analytics products with clearly defined refresh cadence, ownership, and SLAs.
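
For illustration, here is a minimal pandas sketch of how a No-Touch Order (NTO) rate KPI might be computed. The column names (order_id, touch_count, ship_date) are hypothetical assumptions for demonstration, not a reference to any actual NALO schema:

    # Hypothetical sketch: daily No-Touch Order rate in pandas.
    # Column names are illustrative assumptions, not an actual NALO schema.
    import pandas as pd

    orders = pd.DataFrame({
        "order_id": [1001, 1002, 1003, 1004],
        "touch_count": [0, 2, 0, 1],  # manual interventions per order
        "ship_date": pd.to_datetime(
            ["2025-01-06", "2025-01-06", "2025-01-07", "2025-01-07"]
        ),
    })

    # An order counts as "no-touch" if it shipped with zero manual interventions.
    orders["no_touch"] = orders["touch_count"].eq(0)

    # Daily NTO rate: share of orders shipped without manual handling.
    nto_rate = (
        orders.groupby(orders["ship_date"].dt.date)["no_touch"]
        .mean()
        .rename("nto_rate")
    )
    print(nto_rate)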

Data Ingestion & Pipeline Engineering:
  • Architect and implement scalable data ingestion pipelines connecting SAP EWM, WMS operational data, IoT/sensor feeds, and third-party logistics platforms to cloud data platforms.
  • Ensure pipeline reliability, data quality validation, and lineage documentation in accordance with Lilly data governance standards.
  • Apply best practices for batch, micro-batch, and event-driven ingestion patterns based on source system capabilities and latency requirements; a pattern-selection sketch follows this list.
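
As a sketch of the pattern-selection logic above, the following Python routes hypothetical sources to batch, micro-batch, or event-driven ingestion based on assumed latency thresholds. The thresholds and source definitions are illustrative only, not Lilly/NALO standards:

    # Illustrative sketch: choosing an ingestion pattern per source.
    # Thresholds and example sources are assumptions for demonstration.
    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str
        supports_events: bool      # can the system push change events?
        max_latency_seconds: int   # how fresh downstream consumers need data

    def choose_pattern(src: Source) -> str:
        """Pick batch, micro-batch, or event-driven ingestion for a source."""
        if src.supports_events and src.max_latency_seconds <= 60:
            return "event-driven"   # e.g., a Kafka topic per change feed
        if src.max_latency_seconds <= 3600:
            return "micro-batch"    # e.g., 15-minute incremental loads
        return "batch"              # e.g., nightly full/delta extract

    sources = [
        Source("SAP EWM order confirmations", True, 30),
        Source("IoT conveyor sensors", True, 5),
        Source("3PL shipment manifests", False, 86400),
    ]

    for src in sources:
        print(f"{src.name}: {choose_pattern(src)}")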

AI & Advanced Analytics:
  • Prototype and deploy AI/ML use cases aligned to NALO Vision 2030 targets - including demand sensing, anomaly detection, predictive maintenance, and labor optimization; an anomaly-detection sketch follows this list.
  • Partner with NALO Innovation Architect to evaluate and onboard AI tooling and frameworks appropriate for distribution and logistics contexts.
  • Document model assumptions, limitations, and performance metrics transparently for non-technical stakeholders.
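
As one concrete example of the anomaly-detection use case, the sketch below applies scikit-learn's IsolationForest (one of the frameworks named under Required Qualifications) to synthetic throughput data. The data and contamination rate are assumptions for demonstration only:

    # Hedged sketch: flagging anomalous throughput hours with IsolationForest.
    # Synthetic data; the contamination rate is an assumed anomaly share.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Synthetic hourly throughput (units per hour) with a few injected slow hours.
    throughput = rng.normal(loc=1200, scale=80, size=(500, 1))
    throughput[::97] = rng.normal(loc=400, scale=50, size=throughput[::97].shape)

    model = IsolationForest(contamination=0.02, random_state=42)
    labels = model.fit_predict(throughput)  # -1 = anomaly, 1 = normal

    print(f"flagged {(labels == -1).sum()} of {len(throughput)} hours as anomalous")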

Collaboration & Program Integration:
  • Integrate analytics deliverables with active NALO 2.0 program workstreams - including SAP EWM, TraceLink, Tulip, and Swisslog automation tracks.
  • Participate in Agile sprint ceremonies, maintain delivery visibility in Jira, and proactively surface blockers without requiring escalation.
  • Contribute to data product governance documentation, including data dictionaries, access controls, and lifecycle review artifacts.

Required Qualifications:
  • Education: Bachelor's degree in Computer Science, Data Science, Engineering, Information Systems, or a related quantitative discipline.
  • Experience: 5+ years in data engineering, analytics, or applied AI roles; minimum 3 years in cloud analytics environments.
  • Cloud Platforms: Hands-on experience with at least one major cloud platform (AWS, Azure, or GCP), including managed data services (e.g., Redshift, Synapse, BigQuery, Databricks, Snowflake).
  • Data Ingestion: Demonstrated experience designing and operating data ingestion pipelines (ETL/ELT) using tools such as Azure Data Factory, AWS Glue, Informatica, dbt, Apache Kafka, or equivalent.
  • Analytics & BI: Proficiency in SQL and Python (pandas, PySpark, or equivalent); experience delivering production dashboards in Power BI, Tableau, or similar platforms.
  • AI/ML: Working knowledge of ML model development and operationalization (MLOps); experience with at least one major ML framework (scikit-learn, XGBoost, or similar).
  • Self-Direction: Demonstrated ability to scope, prioritize, and deliver independently in ambiguous program environments without day-to-day supervision.

Preferred Qualifications:
Candidates who bring the following will be differentiated in evaluation:
  • Experience with SAP EWM, SAP BW/HANA, or similar ERP/WMS data environments - understanding of order, inventory, and movement data models.
  • Background in pharmaceutical, life sciences, or regulated manufacturing/distribution - familiarity with GxP data requirements.
  • Familiarity with Tulip Operations Platform, Swisslog automation systems, or TraceLink serialization platforms.
  • Experience implementing data mesh, data product, or domain-oriented data architecture patterns.
  • Knowledge of DCAM (Data Management Capability Assessment Model) or equivalent data governance frameworks.
  • Exposure to GenAI/LLM-based use cases in operational or enterprise settings.
  • Active certification in a cloud data platform (e.g., AWS Certified Data Analytics, Azure Data Engineer Associate, Google Professional Data Engineer, Databricks Certified).

What Success Looks Like:

This is not a staff augmentation role. The right candidate will operate as a peer contributor within the NALO Innovation team - bringing their own judgment on technical architecture, proactively identifying gaps in the data strategy, and delivering against program milestones with accountability. Within 90 days, the contractor should be able to:

  • Stand up an active data ingestion pipeline connected to at least one NALO operational data source in the cloud environment.
  • Deliver a functional analytics product (dashboard or model output) consumed by a NALO stakeholder.
  • Produce a documented data product brief for at least one NALO KPI domain.
  • Operate independently within the Agile delivery model, maintaining Jira hygiene and surfacing delivery risk proactively.