At SAC LABs we have deployed edge computing technology in harsh northern environments for more than five years, using off-the-shelf components and basic engineering practice to protect temperature-sensitive electronics.
By combining Wio Link IoT environment modules with a Raspberry Pi 4 for onsite compute and an Ubuntu/Node-RED stack for data collection, we send the collected data to a MongoDB Atlas cluster for storage and analytics.
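As a rough illustration of this pipeline, the sketch below shapes one sensor sample into a document suitable for a MongoDB Atlas collection. The field names, module IDs, and collection names are illustrative assumptions, not the production schema.

```python
from datetime import datetime, timezone

def build_reading(module_id, sensor, value, unit):
    """Shape one sensor sample as a document for the Atlas collection.
    (Field names here are illustrative, not the production schema.)"""
    return {
        "module_id": module_id,
        "sensor": sensor,
        "value": value,
        "unit": unit,
        "recorded_at": datetime.now(timezone.utc),
    }

# Uploading requires a live cluster; with pymongo it would look roughly like:
#   from pymongo import MongoClient
#   client = MongoClient("mongodb+srv://<user>:<pass>@<cluster>/")
#   client.telemetry.readings.insert_one(
#       build_reading("sac-07", "air_temp", -31.5, "C"))
```

In practice the same document shape can be emitted from a Node-RED function node before the flow hands it to a MongoDB output node.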
Together these components form a design that can be deployed to remote locations that cannot be visited frequently. Modules can be distributed over a large geographic area and interconnected through Wi-Fi, LTE, or satellite uplinks to provide broad geographic data collection.
The underlying software stack can be managed with DevOps CI/CD workflows, deploying new collection capabilities from a central location to the edge without onsite intervention.
DevOps workflow for SAC modules:
- Automation: Automate workflows, code testing, and infrastructure provisioning to reduce repetitive manual effort.
- Iteration: Deliver small increments of proven code during sprints to support frequent, rapid deployments.
- Continuous improvement: Continuously test, learn from previous failures, and act on feedback to optimize performance, cost, and time to deployment.
- Collaboration: Unite teams and foster communication to maintain high quality.
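One way to sketch the automation step above is a minimal edge-side update check: the module compares its local deployment manifest against a centrally published one and flags what the CD pipeline needs to roll out. The manifest shape and flow names are assumptions for illustration only.

```python
def needs_update(local_manifest: dict, remote_manifest: dict) -> bool:
    """Decide whether the edge module should pull a new set of flows.
    The 'version' field and manifest layout are illustrative."""
    return remote_manifest.get("version", 0) > local_manifest.get("version", 0)

# Hypothetical manifests for one SAC module:
local = {"version": 3, "flows": ["collect", "report"]}
remote = {"version": 4, "flows": ["collect", "report", "diagnose"]}

pending = []
if needs_update(local, remote):
    # In production this would trigger the CD pipeline's deploy job;
    # here we only compute the set of flows that would be added.
    pending = sorted(set(remote["flows"]) - set(local["flows"]))
```

A real pipeline would layer authentication, rollback, and health checks on top of this comparison, but the version-gated pull is the core pattern that avoids onsite intervention.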
Redundancy and Resiliency Model
Using separate dedicated modules (e.g., compute, sensor, battery, and communications) increases the reliability, remote diagnostics, and self-healing capabilities of the SAC module. Provisioning multiple resources that serve the same function allows fail-over from a failed primary resource to a redundant peer.
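The fail-over idea can be sketched in a few lines: walk the ordered list of redundant modules and activate the first healthy one. The module records and status values below are illustrative assumptions.

```python
def select_active(modules):
    """Return the ID of the first healthy module in priority order,
    falling back through redundant peers; None if all have failed.
    (Record shape and status strings are illustrative.)"""
    for m in modules:
        if m["status"] == "healthy":
            return m["id"]
    return None

# Hypothetical communications modules, primary first:
comms = [
    {"id": "comms-primary", "status": "fault"},
    {"id": "comms-backup", "status": "healthy"},
]
# select_active(comms) -> "comms-backup"
```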
Our resiliency model increases both remote diagnostic intelligence and the longevity of individual SAC modules. By testing and diagnosing primary systems and reading fault codes from the redundant system, we avoid unnecessary onsite maintenance for low batteries or falsely assumed component failures. With in-cloud data, we can see when data was last read out and the last status of each module. When low solar recharge coincides with low temperatures, we can determine that the primary systems will self-heal once weather conditions improve, with no immediate need for an onsite response.
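That dispatch decision can be expressed as a small rule: a low battery in cold weather with no solar recharge is expected to self-heal, while a low battery in mild conditions suggests a real fault. The thresholds and parameter names here are assumptions for illustration, not our production values.

```python
def needs_onsite_visit(battery_v, temp_c, solar_charging,
                       min_battery_v=11.0, min_temp_c=-20.0):
    """Decide whether a fault warrants an onsite response or is expected
    to self-heal when weather improves. Thresholds are illustrative."""
    low_battery = battery_v < min_battery_v
    cold = temp_c < min_temp_c
    if low_battery and cold and not solar_charging:
        # Low solar recharge plus low temperature: expect recovery once
        # conditions improve, so hold off on dispatching a crew.
        return False
    # A low battery in mild weather suggests a genuine component fault.
    return low_battery
```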
- Autonomy: Multi-module redundancy, diagnostics, and fail-over
- Stateless: Distributed services that can be independently managed
- Observability: Intelligent alerting and monitoring using dedicated tracing and continuous reporting
- Reliability: High service availability and redundancy, with remote, onsite, or automated backup and recovery
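A simple instance of the observability principle above is a staleness alert: flag any module whose last report in the cloud is older than an alerting window. The six-hour default window and the `last_seen` mapping are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

def stale_modules(last_seen, max_silence=timedelta(hours=6), now=None):
    """Return IDs of modules whose most recent report is older than
    max_silence. The six-hour default is an assumed alerting threshold."""
    now = now or datetime.now(timezone.utc)
    return sorted(mid for mid, ts in last_seen.items()
                  if now - ts > max_silence)
```

A monitoring flow can run this check on the in-cloud status data and raise an alert only for modules that have actually gone silent, rather than paging on every transient fault.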