By: Adam Baker
4 important considerations when designing the visualization of a monitoring system
The point of a SCADA system is to make site data easy to visualize. But the HMIs in solar are rarely that simple. They often lack features that would make an operator’s job much easier, and the data they present is frequently left open to interpretation.
But that’s not the point of the data collected by SCADA at all.
Over my 20 years working in solar instrumentation and controls, I’ve put together a little wish list of how I would design the best possible HMI, if I developed and owned my own utility-scale solar plant. The idea is to design an HMI that gives me the minimum data I need to make intelligent decisions about my plant operation in the easiest format to interpret.
As you’re looking to redesign your SCADA system, consider the following as upgrades to your current HMI situation.
Design for Operations Visibility
For maximum efficiency, solar plants must make collected information easily readable by operators. The following are big problems facing SCADA users as they interact with their HMI.
Lack of Options Leads to Incorrect Alarming Practices
Operators have a love/hate relationship with alarms. They’re necessary to identify critical issues, but man they can be annoying. Operators need advanced alarming capabilities built into SCADA, such as the ability to relate child alarms to parent alarms to avoid an alarm avalanche.
For example, when a switchgear feeder is tripped for maintenance, I don't need an alarm to tell me every inverter on that feeder has a communications loss. The parent alarm should suppress the child alarm notifications where there is a dependency.
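The parent/child suppression described above can be sketched in a few lines. This is a minimal illustration, not a real SCADA API; the device names and the dependency map are made up.

```python
# Map each child device to the parent whose active alarm suppresses it.
# All device names here are illustrative.
PARENT_OF = {
    "INV-01": "FEEDER-A",
    "INV-02": "FEEDER-A",
    "INV-03": "FEEDER-B",
}

def filter_alarms(active_alarms):
    """Drop child alarms whose parent device is already alarming."""
    alarming = {a["device"] for a in active_alarms}
    return [a for a in active_alarms
            if PARENT_OF.get(a["device"]) not in alarming]

alarms = [
    {"device": "FEEDER-A", "msg": "Feeder tripped for maintenance"},
    {"device": "INV-01", "msg": "Communications loss"},
    {"device": "INV-02", "msg": "Communications loss"},
    {"device": "INV-03", "msg": "High inverter temperature"},
]
kept = filter_alarms(alarms)
# Only the feeder trip and INV-03's unrelated alarm reach the operator.
```

A production system would key the dependencies off the actual electrical topology, but the filtering logic stays this simple.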
Another pet peeve of mine is the lack of differentiation between alarms and events. Alarms should be for ACTIONABLE conditions. (A fuse is blown, a filter needs replacement, a high temperature in an inverter should be investigated). To avoid alarm apathy, log as EVENTS the things you don't need to investigate immediately, but need to keep a log of (ambient air temperature is high, a door was opened, the trackers went into wind stow position). These events are still important, and need to be logged, but what's the operator supposed to do as a corrective action if the air temperature is high?
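The alarm/event split can be enforced with nothing more than a classification table. A minimal sketch, with made-up condition names:

```python
# Conditions an operator must act on go to the alarm queue; everything else
# is logged as an event. The condition names are illustrative.
ACTIONABLE = {"fuse_blown", "filter_needs_replacement", "inverter_high_temp"}

def route(condition, alarm_queue, event_log):
    """Send actionable conditions to alarms, the rest to the event log."""
    (alarm_queue if condition in ACTIONABLE else event_log).append(condition)

alarm_queue, event_log = [], []
for c in ["fuse_blown", "door_opened", "wind_stow", "inverter_high_temp"]:
    route(c, alarm_queue, event_log)
# alarm_queue holds the two actionable conditions; the rest are logged only.
```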
Lack of Normalized Data Leads to Missed or Incorrectly Analyzed Performance Data
The pictures to the right both show a slightly different view of the same raw data. The difference is that the picture on the bottom shows the data normalized to a percentage of ideal output rather than the actual measurement. Looking at the picture on top, can you tell which inverter is underperforming? How about when looking at the normalized data? By basing graphs on percentages rather than raw numbers (data normalization), operators can more quickly identify power losses.
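The normalization itself is trivial to compute; the value is in plotting the percentage instead of the raw reading. A sketch with made-up ratings and readings:

```python
def normalize(readings_kw, expected_kw):
    """Express each inverter's output as a percentage of its expected output."""
    return {inv: round(100.0 * kw / expected_kw[inv], 1)
            for inv, kw in readings_kw.items()}

# Illustrative numbers: INV-03 is a smaller unit, so its raw kW can't be
# compared directly against the others.
expected = {"INV-01": 1000, "INV-02": 1000, "INV-03": 800}
readings = {"INV-01": 940, "INV-02": 955, "INV-03": 700}
pct = normalize(readings, expected)
# INV-03's underperformance stands out in percent terms (87.5 vs ~94-95),
# even though its raw output is supposed to be lower than the others'.
```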
I’ve already discussed both of these subjects at length. Please refer to the links to read about them in more depth.
Real-Time Heat Map Allows for Quick Site Analysis
Power loss is hard to find, especially because losses introduced early in a project tend to persist over its entire life.
A heat map is a great way to quickly visualize power output at your site. A single operator can effectively look over thousands of MW and understand exactly what is occurring at each site.
In order to create an effective heat map, different approaches need to be applied to different measurements. For example, I have used a scale on which inverter output ranged from white (0 kW output) to blue (full output on a clear-sky day). With this scale, and the inverters laid out as they are in the field, it's pretty easy to see a cloud moving across the site, while an underperforming inverter shows up as a persistently lighter shade than those around it.
Alternatively, to visualize transformer oil temperatures around a site, I'd scale the colors from white to blue based on the lowest and highest readings, where we should expect an even distribution of colors in between. If one device reads much differently than the rest (due to a bad sensor, or an actual problem), it will show up at one extreme, and the rest will cluster near the other.
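The two scaling approaches above differ only in the reference used. A minimal sketch that maps a reading to a 0.0 (white) to 1.0 (blue) shade; all numbers are illustrative:

```python
def fixed_scale(value, full_scale):
    """Shade against a fixed reference, e.g. clear-sky-day inverter output."""
    return max(0.0, min(1.0, value / full_scale))

def minmax_scale(value, values):
    """Shade against the lowest and highest readings currently on site."""
    lo, hi = min(values), max(values)
    return 0.0 if hi == lo else (value - lo) / (hi - lo)

# Transformer oil temperatures with one outlier (bad sensor or real problem):
temps = [52.0, 54.0, 53.5, 81.0]
shades = [round(minmax_scale(t, temps), 2) for t in temps]
# The outlier lands at 1.0 while the rest cluster near 0.0 -- exactly the
# one-extreme-versus-the-other pattern described above.
```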
The color can quickly be analyzed to see which arrays or inverters (depending on the site size) are offline, underperforming, or covered by a cloud. If you have a relatively consistent color distribution, you can assume everything is balanced. If one square is an extremely light or dark color, it’s time to investigate.
The downside to this approach is colorblindness: an operator's ability to analyze heat maps may be limited when the differences in shading are subtle, which is why the same data is also provided in bar graphs on the same screen below.
Analyzing DC/AC Ratio to Determine Clipping
If your site was built with a 1.4 DC/AC ratio, but it’s running at 1.2, how would you ever know? Not by analyzing your power output, as technically a 1.2 ratio is still reaching maximum output.
If one 1MW inverter has 1.2MW of DC behind it, but a second inverter has 1.3MW of DC, both inverters will peak at 1MW. But the smaller array will enter clipping later and leave it earlier than the array with more DC, making the energy from the two inverters differ slightly, even though both reach max power. The higher the DC/AC ratio, the wider the clipping window grows, and the narrower the ramping time gets.
I’d love a metric that provides the effective DC/AC ratio to help me understand the health of my DC collection system. This is a very difficult metric for someone who isn’t highly knowledgeable in how PV solar works, which is why you don’t see it on very many solar HMIs. The other challenge in calculating this metric is that on a cloudy day, the time the site enters clipping will not necessarily be indicative of the true ratio. But over a week or two, the relative performance of one inverter to the next is comparable, even if not correct in absolute terms.
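One way to approximate this without deep PV modeling is to track each inverter's daily clipping window, since a shrinking window implies a falling effective DC/AC ratio. A sketch over 5-minute power samples; the threshold and data are illustrative:

```python
def clipping_hours(samples_kw, rated_kw, interval_min=5, tol=0.99):
    """Hours an inverter spends pinned at (or very near) rated output."""
    clipped = sum(1 for kw in samples_kw if kw >= tol * rated_kw)
    return clipped * interval_min / 60.0

# Two 1 MW inverters on the same clear day: the one with less DC behind it
# enters clipping later and leaves earlier, so its window is shorter.
inv_a = [1000] * 60 + [800] * 30   # 60 clipped 5-min samples = 5.0 h
inv_b = [1000] * 36 + [800] * 54   # 36 clipped 5-min samples = 3.0 h
```

As noted above, a single cloudy day distorts this, so the comparison should be made over a week or two and relative to peer inverters.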
Automatic Platform Conversions
From a control systems standpoint, whether we’re talking about a string or central inverter, or a 20MW or 2.5MW site, the data you’ll spend the most time monitoring is the same: three phases of voltage, three phases of current, DC voltage, kW, etc.
But there’s a big problem for control system integrators, and that’s the lack of standardization among brands/platforms.
Whether it’s due to longer-than-normal lead times, or because one EPC has better pricing for an ABB inverter compared to SMA, the mix of devices that shows up from one site to the next is HIGHLY variable. As a result, integrating them into control and monitoring is a constant battle to figure out what's different.
For each platform, different registers are used. Some inverters accept commands in “% output”, others in “kW output”. Others scale values by a factor of a hundred or a thousand. This lack of standardization means mapping new devices within the SCADA system is an absolute pain for integrators.
In an ideal world, control and SCADA systems should be adaptable to fit and change between different types of devices at the click of a button.
The good news is, the information useful to SCADA for monitoring is consistent across any platform.
In the beginning, an integrator like Affinity Energy would write code that maps each device’s specific language. By creating a different mapping routine for every new device, the controls and SCADA could switch between devices without significant modification. It would be as simple as the installer selecting Schneider ION 8650, SEL 735, or Electro Industries Shark 200 inside the SCADA screen to convert data from the device into something the SCADA screen could read. And it wouldn’t involve days of integrator time writing new code for each change in equipment.
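That device-selection idea can be sketched as a table of per-device register maps. The register addresses and units below are invented for illustration; they are NOT the real Modbus maps of these meters.

```python
# Each entry: measurement -> (register address, divisor to convert to kW).
# Addresses and divisors are hypothetical, for illustration only.
DEVICE_MAPS = {
    "SEL 735":   {"kw": (100, 1000)},  # pretend this device reports watts
    "Shark 200": {"kw": (200, 1)},     # pretend this one reports kW directly
}

def read_kw(device_type, read_register):
    """Convert a raw register read into kW using the selected device's map."""
    addr, divisor = DEVICE_MAPS[device_type]["kw"]
    return read_register(addr) / divisor

# A plain dict stands in for the actual Modbus transaction:
raw = {100: 1_250_000, 200: 1250}
a = read_kw("SEL 735", raw.get)    # both devices resolve to the same kW
b = read_kw("Shark 200", raw.get)
```

Adding a new meter then means adding one map entry, not days of new integration code.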
Make an Operator’s Job Easier, Not Harder
The point of SCADA is to provide owners and operators the data they need to manage their site properly. Nobody said reading the data provided should be a difficult task. Too many integrators provide the data from the system without thinking through the most efficient way of turning the data into information that is easy to analyze visually, and at a glance.
By specifying the HMI in a way that makes data analysis brainless, identifying and solving problems associated with that data is just as simple.
Have you asked your operator what features they wish they had to make their job easier? Have you sat down with your integrator to figure out how to implement those features? If you need some help analyzing your current HMI, I’d be happy to help.
Adam Baker is Senior Sales Executive at Affinity Energy with responsibility for providing subject matter expertise in utility-scale solar plant controls, instrumentation, and data acquisition. With 23 years of experience in automation and control, Adam’s previous companies include Rockwell Automation (Allen-Bradley), First Solar, DEPCOM Power, and GE Fanuc Automation.
Adam was instrumental in the development and deployment of three of the largest PV solar power plants in the United States, including 550 MW Topaz Solar in California, 290 MW Agua Caliente Solar in Arizona, and 550 MW Desert Sunlight in the Mojave Desert.
After a 6-year stint in controls design and architecture for the PV solar market, Adam joined Affinity Energy in 2016 and returned to sales leadership, where he has spent most of his career. Adam has a B.S. in Electrical Engineering from the University of Massachusetts, and has been active in environmental and good food movements for several years.