Leveraging RKE to Normalize Data from Over 30,000 OT Devices

Introduction

Logistics plays a crucial role in the success of businesses across industries, covering the coordination and management of goods, services, and information from the point of origin to the point of consumption. This case study focuses on a logistics company that faced challenges in its daily operations, particularly rider management, as it worked to deliver accuracy, efficiency, and seamless operations for its e-commerce and q-commerce clients.

Challenges

  • Over 30,000 cross-vendor OT devices generated massive volumes of data, with varying formats and structures that needed normalization.
  • Sending all data to the cloud for processing would have resulted in high latency and potential data loss.
  • Edge devices had limited computing resources compared to centralized cloud servers, requiring efficient orchestration and resource usage.
  • The solution needed to scale horizontally to accommodate future integrations and increased data volume.

Solution Approach

  • RKE was installed in the edge datacenter to provide a consistent Kubernetes platform for orchestrating containerized applications.
  • The small footprint of RKE ensured low resource usage, making it suitable for resource-constrained environments.
  • A containerized data pipeline was deployed on the RKE clusters to discover, ingest, preprocess, and normalize OT data locally.
  • The pipeline utilized various microservices for data ingestion, transformation, and storage.
  • OT devices communicated directly with the edge computing nodes, reducing data transmission latency.
  • A buffer mechanism handled intermittent connectivity, ensuring that no data was lost during network outages (a store-and-forward sketch follows this list).
  • Incoming data was normalized into a consistent format through preprocessing microservices (see the normalization sketch after this list).
  • Aggregation microservices further reduced data size by filtering noise before sending it to centralized servers for analysis (see the aggregation sketch after this list).
  • The edge computing solution was designed to scale horizontally by adding more nodes as needed.
  • Centralized monitoring and logging allowed the operations team to track edge node performance and respond quickly to issues.
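
The case study does not publish the pipeline's source, but the buffering behaviour described above is commonly implemented as a store-and-forward queue that spools readings to local disk and drains them once the uplink returns. The sketch below is a minimal illustration of that pattern; the SQLite-backed queue, the collector URL, and the send_upstream helper are assumptions made for illustration, not the company's actual implementation.

```python
"""Store-and-forward buffer sketch for intermittent connectivity.
The upstream endpoint and send function are illustrative assumptions."""

import json
import sqlite3
import time
import urllib.request


class DiskBackedBuffer:
    """Spools messages to local SQLite storage until the uplink is reachable."""

    def __init__(self, path="queue.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS queue ("
            "id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT NOT NULL)"
        )
        self.conn.commit()

    def enqueue(self, message: dict) -> None:
        self.conn.execute("INSERT INTO queue (payload) VALUES (?)",
                          (json.dumps(message),))
        self.conn.commit()

    def drain(self, send, batch_size=100) -> int:
        """Send queued messages oldest-first; stop at the first failure so
        nothing is dropped while the network is down. Returns count sent."""
        sent = 0
        rows = self.conn.execute(
            "SELECT id, payload FROM queue ORDER BY id LIMIT ?", (batch_size,)
        ).fetchall()
        for row_id, payload in rows:
            if not send(json.loads(payload)):
                break  # uplink still unavailable; retry on the next cycle
            self.conn.execute("DELETE FROM queue WHERE id = ?", (row_id,))
            self.conn.commit()
            sent += 1
        return sent


def send_upstream(message: dict) -> bool:
    """Hypothetical uplink: POST the message to an assumed central collector."""
    try:
        req = urllib.request.Request(
            "https://central.example.com/ingest",  # assumed endpoint
            data=json.dumps(message).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=5)
        return True
    except OSError:
        return False  # network outage: leave the message queued on disk


if __name__ == "__main__":
    buffer = DiskBackedBuffer()
    buffer.enqueue({"device_id": "scanner-0042", "rssi": -61, "ts": time.time()})
    while True:
        buffer.drain(send_upstream)
        time.sleep(10)  # retry cycle; queued data survives outages and restarts
```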
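
Normalization itself largely comes down to mapping each vendor's field names and units onto one canonical record. The following sketch shows the idea with two hypothetical vendor formats; the field names, unit conversions, and canonical schema are illustrative assumptions rather than the actual mappings used for the 30,000+ devices.

```python
"""Normalization sketch: maps vendor-specific OT payloads onto one common
schema. Vendors, field names, and conversions are assumed for illustration."""

from datetime import datetime, timezone

# Assumed per-vendor mappings: source field -> (canonical field, converter).
VENDOR_MAPPINGS = {
    "vendor_a": {
        "devId": ("device_id", str),
        "tempC": ("temperature_c", float),
        "ts": ("timestamp",
               lambda v: datetime.fromtimestamp(v, tz=timezone.utc).isoformat()),
    },
    "vendor_b": {
        "serial": ("device_id", str),
        "temp_f": ("temperature_c", lambda v: round((float(v) - 32) * 5 / 9, 2)),
        "reported_at": ("timestamp", str),  # already ISO 8601
    },
}


def normalize(vendor: str, raw: dict) -> dict:
    """Translate one raw reading into the canonical record format."""
    mapping = VENDOR_MAPPINGS[vendor]
    record = {"vendor": vendor}
    for source_field, (canonical_field, convert) in mapping.items():
        if source_field in raw:
            record[canonical_field] = convert(raw[source_field])
    return record


if __name__ == "__main__":
    print(normalize("vendor_a", {"devId": "A-17", "tempC": 4.5, "ts": 1700000000}))
    print(normalize("vendor_b", {"serial": "B-903", "temp_f": "40.1",
                                 "reported_at": "2023-11-14T22:13:20+00:00"}))
```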
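
One way to achieve the noise filtering and volume reduction attributed to the aggregation microservices is to collapse raw readings into fixed time-window summaries and discard windows whose values barely change. The window length and dead-band threshold below are assumed values, not figures from the case study.

```python
"""Aggregation sketch: collapses per-reading data into per-device window
summaries before forwarding. Window size and dead-band are assumptions."""

from collections import defaultdict
from statistics import mean


def aggregate(records, window_s=60, dead_band=0.1):
    """Group normalized records into (device, window) buckets and emit one
    summary per bucket, skipping devices whose value barely changed."""
    buckets = defaultdict(list)
    for r in records:
        window = int(r["epoch_s"] // window_s) * window_s
        buckets[(r["device_id"], window)].append(r["value"])

    summaries = []
    for (device_id, window), values in buckets.items():
        if len(values) > 1 and max(values) - min(values) < dead_band:
            continue  # noise only: nothing worth forwarding for this window
        summaries.append({
            "device_id": device_id,
            "window_start": window,
            "count": len(values),
            "min": min(values),
            "max": max(values),
            "mean": round(mean(values), 3),
        })
    return summaries


if __name__ == "__main__":
    readings = [
        {"device_id": "A-17", "epoch_s": 1700000001, "value": 4.50},
        {"device_id": "A-17", "epoch_s": 1700000031, "value": 4.52},
        {"device_id": "B-903", "epoch_s": 1700000005, "value": 4.0},
        {"device_id": "B-903", "epoch_s": 1700000020, "value": 9.7},
    ]
    # A-17's window is filtered as noise; B-903's window is summarized.
    print(aggregate(readings))
```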

Results

  • Local data preprocessing and aggregation at the edge reduced the overall volume of data transmitted to the cloud.
  • On-premises data normalization minimized latency, allowing faster decision-making for time-sensitive operations.

Conclusion

The enterprise successfully leveraged Rancher Kubernetes Engine (RKE) for efficient edge computing, enabling data normalization and aggregation from over 30,000 OT devices. Processing 1,500 messages per second, the solution minimized data transmission latency and improved resilience while providing a scalable, consistent platform for containerized applications. This approach laid a foundation for future expansion and optimized data management at the edge.

Let's Do Something Great Together!

As they say, it takes two to tango! Just tell us your specific needs and we will come up with an innovative solution that not only meets your objectives but also helps you stand apart from your competitors.
