During the first decade of this century, advances in several disparate technologies set the stage for a substantial rise in autonomous and semi-autonomous remote intelligent-node deployments that continues to this day. The trend arose in applications as different as hydrological surveying, transportation tracking and management, and machine-health monitoring. It has also kicked off a flurry of activity that has hastened the commercialization of the IoT.
The trend includes the emergence of:
• ultra-low-power microcontrollers with sophisticated power- and resource-management methods that optimize processing performance and peripheral interactions per unit of dissipated power.
• robust low-power short-haul wireless communication technologies.
• significantly more efficient power converters and power conversion architectures that maintain high efficiency down to low load currents.
• new components to capture what would otherwise be waste energy from systems or the environment.
Regarding this last item, the relationship between a source of capturable energy and the load it powers falls into two broad categories. In the first, the waste-energy source correlates well in time with the load’s energy use.
A real-world example is the wireless fuselage-stress monitoring system on the Boeing 787 Dreamliner. It reportedly uses thermal energy taken from the temperature difference between the aircraft’s skin and its interior. In this case, the availability of energy is nearly perfectly correlated with its use: The system reports data when the airplane is at altitude and the temperature difference is high. When the vehicle is on the ground, the temperature difference may be too low to power the wireless sensor nodes, but they aren’t in service on the ground anyway.
By contrast, wind power generation exemplifies nearly the opposite extreme. In this case, captured energy may be used immediately, but energy demands don’t naturally correlate with windy intervals, even for powering a weather station.
Engineers find it helpful to distinguish between the two modes of energy extraction: cases where energy is used as it’s extracted are called energy harvesting, whereas energy extracted and stored for later use is generally referred to as scavenged energy. The distinction matters because it has important consequences in the design process.
Designs can be relatively simple when the energy supply reliably exceeds worst-case peak power demand. Where energy availability and load demand don’t match up well, however, the energy source and its variations must be thoroughly characterized over an extended period, as sketched below. These characterizations are often site-specific, which further complicates the use of marginal sources of waste energy.
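As a rough illustration of that characterization step, the Python sketch below scans a logged trace of harvested power for the worst-case window, i.e., the stretch over which the source delivers the least energy. Every figure and name in it is hypothetical; a real characterization would span seasons and site conditions rather than the single invented day shown here.

```python
# Minimal sketch: characterize a logged energy-source trace.
# All numbers are hypothetical; a real log would cover months at a site.

SAMPLE_PERIOD_S = 60.0  # one power sample per minute

def worst_case_window_energy(power_w, window_samples):
    """Return the minimum energy (joules) the source delivers over
    any contiguous window of `window_samples` samples."""
    if len(power_w) < window_samples:
        raise ValueError("trace shorter than the window of interest")
    # Energy in the first window, then slide the window across the trace.
    window_j = sum(power_w[:window_samples]) * SAMPLE_PERIOD_S
    worst_j = window_j
    for i in range(window_samples, len(power_w)):
        window_j += (power_w[i] - power_w[i - window_samples]) * SAMPLE_PERIOD_S
        worst_j = min(worst_j, window_j)
    return worst_j

# Hypothetical 24-hour trace: a thermal gradient that fades overnight.
trace_w = [0.002 if 8 <= (minute // 60) < 20 else 0.0001
           for minute in range(24 * 60)]

avg_w = sum(trace_w) / len(trace_w)
worst_hour_j = worst_case_window_energy(trace_w, window_samples=60)
print(f"average source power:  {avg_w * 1000:.3f} mW")
print(f"worst one-hour energy: {worst_hour_j:.3f} J")
```

Comparing the worst-case window against the load’s demand over the same interval indicates whether the design falls into the simple case or needs the storage analysis that follows.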
Similarly, the load design requires careful budgeting of both energy and power to ensure that neither demand exceeds supply. As a general rule, applications in which the source’s availability doesn’t line up with the load’s demand also need a way to store energy; the sketch below ties the two budgeting steps together.
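To make the budgeting concrete, this second sketch (again with invented duty-cycle, source, and outage figures) totals the energy a duty-cycled node draws per reporting cycle, checks the average and peak demands against the source, and sizes the storage element needed to ride through the longest expected source outage. It is illustrative only; a real budget must also cover converter losses, leakage, and storage self-discharge.

```python
# Minimal energy-budget sketch for a duty-cycled node.
# Every figure below is hypothetical.

# Load profile: (name, power in watts, duration in seconds) per cycle.
LOAD_PHASES = [
    ("sleep",    3e-6,  59.0),   # deep sleep between reports
    ("measure",  5e-3,   0.5),   # sensor and ADC active
    ("transmit", 30e-3,  0.5),   # short radio burst
]
CYCLE_S = sum(duration for _, _, duration in LOAD_PHASES)

# Source and outage assumptions.
AVG_SOURCE_W   = 1.0e-3        # average harvested power
MAX_OUTAGE_S   = 12 * 3600.0   # longest stretch with no usable source
STORAGE_DERATE = 0.7           # usable fraction of rated storage capacity

cycle_j = sum(power * duration for _, power, duration in LOAD_PHASES)
avg_load_w = cycle_j / CYCLE_S
peak_w = max(power for _, power, _ in LOAD_PHASES)

print(f"energy per cycle:   {cycle_j * 1000:.3f} mJ")
print(f"average load power: {avg_load_w * 1e6:.1f} uW")
print(f"peak load power:    {peak_w * 1000:.1f} mW (storage/converter must deliver this)")
print(f"energy budget OK:   {avg_load_w < AVG_SOURCE_W}")

# Storage must carry the load through the worst-case outage.
outage_j = avg_load_w * MAX_OUTAGE_S
rated_j = outage_j / STORAGE_DERATE
print(f"storage for outage: {outage_j:.2f} J usable "
      f"({rated_j:.2f} J rated at {STORAGE_DERATE:.0%} usable)")
```

Note that the energy check uses averages while the power check uses the peak: a source that satisfies the long-term energy budget may still be unable to deliver the transmit burst directly, which is another reason a storage element usually sits between source and load.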
The distinction between harvesting and scavenging was originally made in the context of low-power applications. But as the use of recovered waste energy has grown, the concepts have become useful at significantly higher power levels. Utilities, for example, now apply them in their research into energy-storage options for solar and wind-power resources.
At the utility level, storage is an important component of an overall strategy to maximize the use of renewables for load leveling. Without storage, energy captured from renewables merely allows the utility to throttle its core generating resources temporarily; it doesn’t change the required core capacity.
The picture changes somewhat for scavenged energy, however, with finer-grained distributed generating resources that sit on the building side of a utility meter. Here there are two primary cases. In one, exemplified by solar-equipped schools and office buildings, the availability of scavenged energy coincides reasonably well with typical structure-use patterns, which are biased toward daylight hours. Here, the local generating capability directly offsets the cost of grid-delivered energy.
In the other case, typical use models don’t align with scavenging opportunities, and the resource owner can push energy back onto the grid. This reduces the net monthly cost of grid-delivered energy but leaves the utility to contend with energy from sources it had no hand in planning or deploying and doesn’t control. That said, as premises-based generation grows, so do opportunities for utility/rate-payer cooperation that benefit both parties.