What is an Open Energy Data Platform?
Utilities have lots of data, but not much of it is easily available, synchronized, and in the same place. Rich situational awareness, outage management, real-time performance monitoring, and DER control and analytics, among myriad other use cases that aid reliability and resiliency, are only effective if all of your data sources are linked together and feeding into a centralized system. An Open Energy Data Platform acts as a data warehouse that gives you the flexibility to integrate data sources easily, whether they come from generation, consumption, or both (V2G), in whatever format, at whatever frequency, and from whatever device. It does the heavy lifting to prepare, cleanse, and synchronize all data sources, while making the data rapidly accessible to authorized users and systems.
“The whole is greater than the sum of its parts” – Aristotle
What’s the problem?
For the average utility, modern data-reading devices now produce millions, if not billions, of data points a day. Billions. Together, SCADA and AMI networks alone produce mind-boggling amounts of data. On top of that, solar PV, battery storage, charging infrastructure (EVSE), grid-edge appliances, and weather data each sit in a repository somewhere, often siloed from the other data sources, despite being inextricably linked and connected in the grid. These silos make performing analysis, designing algorithms, and building applications time-consuming and highly complex, and the results are often incomplete.
Let’s take an example
A utility data analyst, data scientist, or distribution engineer wants to correlate DER output throughout the day with EV charging habits, to understand the percentage of renewable energy powering EVs in the service territory. The analyst goes to the Jinko SolarPV database where generation information is stored, two storage metrology databases (sonnen and Generac) for data on battery device output, the SCADA historian for energy pulled into the relevant feeders, and upstream generation data to incorporate the renewable percentage of the energy supplied from the transmission grid. Accessing this information is a complex, multi-faceted database extraction process, followed by lengthy synchronization and cleansing efforts to build datasets in the shape needed to perform the analytics. Additionally, the tools collecting this data sometimes provide pre-made visualizations of it, but use proprietary formats or do not provide direct access to the data itself, making further analytics difficult or even impossible.
The Open Energy Data Platform removes the need for this effort. The data platform has all relevant data integrated, secure, and accessible only to authorized parties or employees. It enables rapid, scalable extraction of specific data, which arrives pre-synchronized, in a structured format, and connects directly to your analytical tool of choice.
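To make the contrast concrete, here is a minimal sketch of the analysis itself, assuming the platform has already returned DER output, EV charging load, and the upstream grid mix synchronized to a common hourly interval. The names and numbers below are illustrative only, not Awesense data or APIs.

```python
# Illustrative sketch: share of renewable energy powering EV charging, once the
# platform delivers pre-synchronized series. Values are synthetic stand-ins.
import pandas as pd

idx = pd.date_range("2024-06-01", periods=24, freq="h")

# In practice these frames would come from a single query against the platform.
der_output_kwh = pd.Series([0]*6 + [5, 12, 20, 28, 33, 35, 34, 30, 22, 14, 6, 1] + [0]*6, index=idx)
ev_charging_kwh = pd.Series([8, 7, 6, 5, 5, 6, 4, 3, 2, 2, 3, 4,
                             5, 6, 7, 9, 12, 15, 18, 20, 19, 16, 12, 10], index=idx)
grid_renewable_pct = pd.Series([0.25] * 24, index=idx)  # upstream renewable share

# Local DER covers part of the EV load; the remainder draws from the upstream mix.
local_renewable = der_output_kwh.clip(upper=ev_charging_kwh)
imported = ev_charging_kwh - local_renewable
renewable_to_evs = local_renewable + imported * grid_renewable_pct

share = renewable_to_evs.sum() / ev_charging_kwh.sum()
print(f"Estimated renewable share of EV charging: {share:.1%}")
```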
What features are needed to make a data platform like this work?
Data Model Design
The data must be provided in a form that is specific to utility and industrial energy systems, yet universal across them. A data model (or schema) has to be designed for this purpose; a minimal sketch follows the list of attributes below.
Key Attributes
- Relational data model semantics specialized for time series and geospatial data.
- 360° view of data relationships and dependencies across datasets.
- Highly adaptable, scalable, portable and performant.
- Highly secure, with data governance policies built-in and automated.
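As a rough illustration of what relational semantics specialized for time series and geospatial data can look like, here is a minimal schema sketch. The table and column names are hypothetical, not the Awesense data model, and SQLite is used only so the example runs anywhere; a production model would sit on a geospatially aware, governed store.

```python
# Hypothetical schema sketch: assets with connectivity and location, plus a
# time-series measurement table keyed by asset, timestamp, and quantity.
import sqlite3

ddl = """
CREATE TABLE asset (
    asset_id    TEXT PRIMARY KEY,
    asset_type  TEXT NOT NULL,                    -- e.g. 'feeder', 'pv_inverter', 'evse', 'battery'
    parent_id   TEXT REFERENCES asset(asset_id),  -- grid connectivity / hierarchy
    latitude    REAL,                             -- geospatial attributes kept with the asset
    longitude   REAL
);

CREATE TABLE measurement (
    asset_id    TEXT NOT NULL REFERENCES asset(asset_id),
    ts_utc      TEXT NOT NULL,          -- ISO-8601 timestamp, normalized to UTC
    quantity    TEXT NOT NULL,          -- e.g. 'power_kw', 'energy_kwh', 'voltage_v'
    value       REAL,
    quality     TEXT DEFAULT 'raw',     -- 'raw', 'estimated', 'validated'
    PRIMARY KEY (asset_id, ts_utc, quantity)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
```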
Data Quality
The ingestion process into the platform must apply validation, estimation and error correction (VEE), along with synchronization, to all data as it is ingested and transformed to conform to the model.
Key Attributes
- Data validation, estimation, and error correction performed on ingested data.
- Identification of gaps in data and location of data blind spots.
- Synchronization of time series data sources to ensure consistency.
Data quality is a crucial factor when analyzing and operating energy systems. Systems responsible for reliably operating an electrical grid, such as SCADA, OMS, and ADMS, rely heavily on accurate data. Automating data-quality improvement across all data sources ensures that these systems function as designed.
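Here is a minimal sketch of this kind of VEE and synchronization step, assuming raw interval readings arrive as a pandas series; the plausibility thresholds and the 15-minute interval are illustrative, not platform defaults.

```python
# Toy VEE pass: validate against a plausible range, align to a regular interval,
# flag gaps, and estimate short gaps by interpolation.
import pandas as pd
import numpy as np

raw = pd.Series(
    [102.0, 101.5, np.nan, 99.8, 5000.0, 100.2],  # a gap and an implausible spike
    index=pd.to_datetime([
        "2024-06-01 00:00", "2024-06-01 00:15", "2024-06-01 00:30",
        "2024-06-01 00:45", "2024-06-01 01:00", "2024-06-01 01:30",  # 01:15 missing entirely
    ]),
)

# Validation: drop values outside an assumed plausible range for this measurement.
validated = raw.where(raw.between(0, 1000))

# Synchronization: align to a regular 15-minute grid, exposing the missing interval.
aligned = validated.resample("15min").mean()

# Identification of gaps / blind spots.
gaps = aligned[aligned.isna()].index

# Estimation: fill short gaps by interpolation and mark them as estimated.
estimated = aligned.interpolate(limit=2)
quality = pd.Series("validated", index=aligned.index).mask(aligned.isna(), "estimated")

print(pd.DataFrame({"value": estimated, "quality": quality}))
print("Gaps detected at:", list(gaps))
```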
Accessing Data
Accessing data from the platform must be a seamless, repeatable process. It has to save time, and remove the need to manually request and wait for data from IT.
Key Attributes
- Easily queryable data model.
- Possible to easily aggregate and combine data sources.
- Predefined functions for “lighter lifting”.
- SQL support.
- RESTful API support.
- [Optional] Direct support for other programming languages (e.g. Python).
Accessing data must be underpinned by highly secure data protection and cybersecurity policies. While the data model is public, the data itself is not; the word “open” means the utility is not locked out of its own data by vendors.
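As an illustration of the “easily queryable” idea, here is a sketch in which a single SQL statement combines and aggregates two data sources. SQLite and the table and column names are stand-ins so the example runs on its own; against the real platform, the same style of SQL would run over an authorized connection.

```python
# One query joining PV generation with EV charging per feeder, per the unified model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pv_generation (ts TEXT, feeder_id TEXT, energy_kwh REAL);
CREATE TABLE ev_charging   (ts TEXT, feeder_id TEXT, energy_kwh REAL);
INSERT INTO pv_generation VALUES ('2024-06-01T12:00', 'F1', 42.0), ('2024-06-01T13:00', 'F1', 38.5);
INSERT INTO ev_charging   VALUES ('2024-06-01T12:00', 'F1', 18.0), ('2024-06-01T13:00', 'F1', 21.0);
""")

rows = conn.execute("""
    SELECT p.feeder_id,
           SUM(p.energy_kwh) AS pv_kwh,
           SUM(e.energy_kwh) AS ev_kwh
    FROM pv_generation p
    JOIN ev_charging e ON e.ts = p.ts AND e.feeder_id = p.feeder_id
    GROUP BY p.feeder_id
""").fetchall()

for feeder, pv, ev in rows:
    print(f"{feeder}: PV {pv} kWh vs EV charging {ev} kWh")
```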
Gradual data onboarding
Data integration shouldn’t be a bottleneck. An energy data platform should allow users to grow the number of data sources feeding into the system over time. Starting small, focusing on the most important data sources first, and expanding from there decreases time to value, minimizes growing pains, and promotes a more agile, iterative approach to data integration; a small sketch of this layering follows the list below.
Key Attributes
- Ability to layer in data sources over time. Integrate data from time-series measurement devices, such as DER, EVSE, and others when it makes sense.
- Start with a limited number of data sources initially.
- Must be sensor and device agnostic.
- Provides rapid time to value with any number of data sources being ingested.
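A toy sketch of what layering in sources over time might look like from the integration side; the DataSource descriptor, the register() helper, and the source names are purely hypothetical, not an Awesense API.

```python
# Hypothetical registry of data sources that grows phase by phase,
# staying sensor- and device-agnostic.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str
    kind: str              # e.g. 'ami', 'scada', 'pv', 'evse', 'weather'
    interval_minutes: int

@dataclass
class Platform:
    sources: list = field(default_factory=list)

    def register(self, source: DataSource) -> None:
        # New sources are layered in without touching existing ones.
        self.sources.append(source)

platform = Platform()

# Phase 1: start with the highest-value sources.
platform.register(DataSource("ami_meters", "ami", 15))
platform.register(DataSource("scada_historian", "scada", 1))

# Phase 2 (later): add DER and EVSE telemetry when it makes sense.
platform.register(DataSource("rooftop_pv", "pv", 5))
platform.register(DataSource("public_evse", "evse", 1))

print([s.name for s in platform.sources])
```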
Connect your tools
The platform should allow direct connections from the applications you already use to perform analytics, design algorithms, and operate the energy system; a minimal notebook-style example follows the list of connectors below.
Key Attributes
- Connectors for notebook tools – Jupyter, Zeppelin, R Markdown notebooks.
- Connectors for business intelligence tools – Power BI, QuickSight, and Tableau.
- Connectors for other systems, such as OMS, DERMS, etc.
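For example, a notebook session might pull platform data straight into pandas through a SQL connector. The in-memory SQLite engine below stands in for the platform’s SQL endpoint so the snippet runs anywhere; in a real notebook, the engine URL and table names would come from the platform, and the result could feed a BI dashboard just as easily.

```python
# Notebook-style access sketch: query platform data into a DataFrame via SQLAlchemy.
import pandas as pd
from sqlalchemy import create_engine

# In-memory SQLite stands in for the platform's SQL endpoint in this sketch.
engine = create_engine("sqlite://")
pd.DataFrame({
    "ts_utc": ["2024-06-01T12:00", "2024-06-01T13:00"],
    "feeder_id": ["F1", "F1"],
    "energy_kwh": [60.0, 59.5],
}).to_sql("feeder_load", engine, index=False)

df = pd.read_sql(
    "SELECT feeder_id, SUM(energy_kwh) AS total_kwh FROM feeder_load GROUP BY feeder_id",
    engine,
)
print(df)
```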
How does this translate to benefits?
The benefit of having this type of platform, designed specifically for energy and power delivery, is that there’s no longer a need to fit a square peg into a round hole. Data repositories designed for “every” use case across “every” industry sacrifice depth for breadth. Maximum benefit cannot be achieved when the data model is not designed for the specific datasets it holds.
It is crucial to optimize your data repository and data model for energy systems and energy data. Only then can you guarantee that the information your teams and systems need to analyze and operate the grid is actually useful: it arrives cleansed, in an optimal format, is rapidly accessible, and can easily be used to perform analytics, design algorithms, and build applications.
Awesense practices these principles, helping our utility and industrial customers prepare for the future of energy.