Google AI Blog: The Balloon Learning Environment


Benchmark challenges have been a driving force in the advancement of machine learning (ML). In particular, difficult benchmark environments for reinforcement learning (RL) have been crucial to the rapid progress of the field, challenging researchers to overcome increasingly difficult tasks. The Arcade Learning Environment, MuJoCo, and others have been used to push the envelope in RL algorithms, representation learning, exploration, and more.

In “Autonomous Navigation of Stratospheric Balloons Using Reinforcement Learning”, published in Nature two years ago, we demonstrated how deep RL can be used to create a high-performing flight agent that can control stratospheric balloons in the real world. This research showed that deep RL can be successfully applied outside of simulated environments, and contributed practical knowledge for integrating RL algorithms with complex dynamical systems.

Today we are excited to announce the open-source release of the Balloon Learning Environment (BLE), a new benchmark emulating the real-world problem of controlling stratospheric balloons. The BLE is a high-fidelity simulator, which we hope will provide researchers with a valuable resource for deep RL research.

Station-Keeping Stratospheric Balloons

Stratospheric balloons are filled with a buoyant gas that allows them to float for weeks or months at a time in the stratosphere, about twice as high as a passenger plane’s cruising altitude. Though there are many variations of stratospheric balloons, the kind emulated in the BLE is equipped with solar panels and batteries, which allow it to adjust its altitude by controlling the weight of air in its ballast using an electric pump. However, these balloons have no way to propel themselves laterally, which means that they are subject to the wind patterns in the air around them.

By changing its altitude, a stratospheric balloon can surf winds moving in different directions.

The goal of an agent in the BLE is to station-keep, i.e., to control a balloon to remain within 50 km of a fixed ground station, by changing its altitude to catch winds that it finds favorable. We measure how successful an agent is at station-keeping by the fraction of time the balloon is within the specified radius, denoted TWR50 (i.e., the time within a radius of 50 km).
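As a concrete illustration (not the BLE’s own evaluation code), TWR50 can be computed from a logged trajectory roughly as follows; the coordinate convention and helper names here are assumptions:

```python
import math

def twr50(trajectory, station=(0.0, 0.0), radius_km=50.0):
    """Fraction of timesteps the balloon spends within `radius_km`
    of the ground station (the TWR50 metric when radius_km=50).

    `trajectory` is a sequence of (x, y) balloon positions in km,
    expressed in the station's local tangent-plane coordinates.
    """
    in_range = sum(
        1 for (x, y) in trajectory
        if math.hypot(x - station[0], y - station[1]) <= radius_km
    )
    return in_range / len(trajectory)

# A toy trajectory that drifts out of range partway through:
path = [(0.0, 10.0 * t) for t in range(11)]  # y = 0, 10, ..., 100 km
print(twr50(path))  # 6 of the 11 points lie within 50 km
```

A perfect station-keeper scores 1.0; a balloon that never enters the radius scores 0.0.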

A station-keeping balloon must navigate a changing wind field to stay above a ground station. Left: side elevation of a station-keeping balloon. Right: bird’s-eye view of the same balloon.

The Challenges of Station-Keeping

To create a realistic simulator (without including copious amounts of historical wind data), the BLE uses a variational autoencoder (VAE) trained on historical data to generate wind forecasts that match the characteristics of real winds. A wind noise model is then applied to make the wind fields more realistic, matching what a balloon would encounter in real-world conditions.
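Conceptually, generating a fresh wind field amounts to sampling a latent vector from the VAE prior, decoding it, and perturbing the result with a noise model. The sketch below shows only that shape of computation; the stand-in linear decoder, dimensions, and noise scale are invented for illustration and do not reproduce the BLE’s actual model:

```python
import random

LATENT_DIM = 4
NUM_ALTITUDES = 3  # toy stand-in for the simulator's altitude grid

def decode(z):
    """Stand-in for a trained VAE decoder: maps a latent vector to
    one (u, v) wind vector per altitude level."""
    return [
        (sum(z) * 0.1 + level, sum(abs(x) for x in z) * 0.1 - level)
        for level in range(NUM_ALTITUDES)
    ]

def sample_wind_field(rng, noise_scale=0.5):
    # 1. Sample a latent vector from the VAE prior, z ~ N(0, I).
    z = [rng.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]
    # 2. Decode it into a smooth "forecast" wind field.
    forecast = decode(z)
    # 3. Perturb with a noise model, so the winds the balloon actually
    #    encounters differ from the forecast it is given.
    true_field = [
        (u + rng.gauss(0.0, noise_scale), v + rng.gauss(0.0, noise_scale))
        for (u, v) in forecast
    ]
    return forecast, true_field

rng = random.Random(0)
forecast, true_field = sample_wind_field(rng)
print(len(forecast), len(true_field))  # one (u, v) pair per altitude
```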

Navigating a stratospheric balloon through a wind field can be quite challenging. The winds at any given altitude rarely remain ideal for long, and a good balloon controller will need to move up and down through its wind column to locate more suitable winds. In RL parlance, the problem of station-keeping is partially observable, because the agent only has access to forecasted wind data to make these decisions. An agent has access to wind forecasts at every altitude and the true wind at its current altitude. The BLE returns an observation which includes a notion of wind uncertainty.

A stratospheric balloon must explore winds at different altitudes in order to find favorable winds. The observation returned by the BLE includes wind predictions and a measure of uncertainty, produced by blending a wind forecast with the winds measured at the balloon’s altitude.
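One simple way to blend a forecast with an in-situ measurement, shown purely to illustrate the idea (the BLE’s actual blending scheme is not reproduced here), is an inverse-variance weighted average, which also yields the reduced uncertainty of the combined estimate:

```python
def blend(forecast, forecast_var, measured, measured_var):
    """Precision-weighted fusion of a forecast wind component with a
    component measured at the balloon's altitude. Returns the blended
    estimate and its (reduced) variance."""
    w_f = 1.0 / forecast_var
    w_m = 1.0 / measured_var
    blended = (w_f * forecast + w_m * measured) / (w_f + w_m)
    blended_var = 1.0 / (w_f + w_m)
    return blended, blended_var

# Forecast says 5 m/s with high uncertainty; the balloon measures
# 8 m/s locally with low uncertainty, so the blend leans toward 8.
est, var = blend(forecast=5.0, forecast_var=4.0, measured=8.0, measured_var=1.0)
print(round(est, 2), round(var, 2))  # 7.4 0.8
```

Note that the blended variance (0.8) is lower than either input variance: combining the two sources always reduces uncertainty under this model.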

In some situations, there may not be suitable winds anywhere in the balloon’s wind column. In this case, a skilled agent is still able to fly toward the station by taking a more circuitous route through the wind field (a common example is when the balloon moves in a zig-zag fashion, similar to tacking on a sailboat). Below we demonstrate that even just remaining in range of the station usually requires significant acrobatics.

An agent must handle long planning horizons to succeed at station-keeping. In this case, StationSeeker (an expert-designed controller) heads directly to the center of the station-keeping area and is pushed out, while Perciatelli44 (an RL agent) is able to plan ahead and stay in range longer by hugging the edge of the area.

Night-time adds a fresh element of difficulty to station-keeping in the BLE, reflecting the real-world night-time changes in physical conditions and power availability. While during the day the air pump is powered by solar panels, at night the balloon relies on its on-board batteries for energy. Using too much power early in the night typically results in limited maneuverability in the hours preceding dawn. This is where RL agents can discover quite creative solutions, such as descending in the afternoon in order to store potential energy.

An agent must balance the station-keeping objective with a finite energy allowance at night.
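A toy illustration of the night-time budgeting problem (not the BLE’s power model; every quantity here is invented): rationing a fixed battery charge evenly over the remaining hours of darkness caps how many pump actions the agent can afford each hour, so spending heavily at dusk leaves little room to maneuver before dawn.

```python
def max_actions_this_hour(battery_wh, hours_until_dawn, wh_per_action):
    """Greedy even-rationing rule: spend at most an equal share of the
    remaining battery in each remaining hour of darkness."""
    budget_this_hour = battery_wh / hours_until_dawn
    return int(budget_this_hour // wh_per_action)

# 100 Wh left, 8 hours of night remaining, 3 Wh per pump action:
print(max_actions_this_hour(100.0, 8, 3.0))  # 4 actions (12.5 Wh budget)

# The same charge with only 2 hours left allows far more activity:
print(max_actions_this_hour(100.0, 2, 3.0))  # 16 actions
```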

Despite all these challenges, our research demonstrates that agents trained with reinforcement learning can learn to perform better than expert-designed controllers at station-keeping. Along with the BLE, we are releasing the main agents from our research: Perciatelli44 (an RL agent) and StationSeeker (an expert-designed controller). The BLE can be used with any reinforcement learning library, and to showcase this we include Dopamine’s DQN and QR-DQN agents, as well as Acme’s QR-DQN agent (supporting both standalone and distributed training with Launchpad).
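The BLE exposes the familiar reset/step interaction loop that these libraries expect. The sketch below runs that loop against a trivial stand-in environment so it is self-contained; the real BLE environment name, observation, and action spaces are not reproduced here, and the random policy is just a placeholder for a learned agent:

```python
import random

class StubBalloonEnv:
    """Minimal stand-in with a Gym-style reset/step interface.
    Actions 0/1/2 mean descend/stay/ascend one toy altitude level."""

    def reset(self):
        self.altitude = 5
        self.t = 0
        return self.altitude

    def step(self, action):
        self.altitude += action - 1
        self.t += 1
        # Reward staying inside a band of "favorable" altitudes.
        reward = 1.0 if 3 <= self.altitude <= 7 else 0.0
        done = self.t >= 20  # fixed-length toy episode
        return self.altitude, reward, done, {}

env = StubBalloonEnv()
obs, total_reward, done = env.reset(), 0.0, False
rng = random.Random(0)
while not done:
    action = rng.randrange(3)  # placeholder for agent.select_action(obs)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(env.t, done)  # episode ran for 20 steps
```

Swapping the stub for the real environment and the random policy for one of the released agents leaves the loop itself unchanged.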

Evaluation performance of the included benchmark agents on the BLE. “Finetuned” is a fine-tuned Perciatelli44 agent, and Acme is a QR-DQN agent trained with the Acme library.

The BLE source code contains information on how to get started with the BLE, including training and evaluating agents, documentation on the various components of the simulator, and example code. It also includes the historical wind field data (as a TensorFlow DataSet) used to train the VAE, allowing researchers to experiment with their own models for wind field generation. We are excited to see the progress that the community will make on this benchmark.


Acknowledgements

We would like to thank the Balloon Learning Environment team: Sal Candido, Marc G. Bellemare, Vincent Dumoulin, Ross Goroshin, and Sam Ponda. We’d also like to thank Tom Small for his excellent animations in this blog post and graphic design help, along with our colleagues, Bradley Rhodes, Daniel Eisenberg, Piotr Stańczyk, Anton Raichuk, Nikola Momchev, Geoff Hinton, Hugo Larochelle, and the rest of the Brain team in Montreal.

