As developing nations around the world become more industrialized, the global thirst for oil is growing. Some experts forecast that worldwide demand for oil will exceed 100 million barrels per day by next year.
Meeting this demand is a major challenge for oil companies, especially with many of the most readily available reserves of oil already tapped. Exploration for new sources of oil is expensive and difficult. As they strive to meet this challenge, many energy companies are using high-performance computing solutions to give them an advantage.
These companies employ HPC solutions to process complex data sets, such as seismic surveys and 3D imaging, applying sophisticated algorithms to analyze and visualize the results. The machines must be able to manage petabytes (quadrillions of bytes) of data, and the demands on them are only growing as energy companies strive to make faster, better-informed exploration decisions.
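The seismic workloads described above lean heavily on signal-processing kernels such as filtering and imaging. The sketch below is purely illustrative, not any company's pipeline: a frequency-domain band-pass filter (NumPy assumed, all data synthetic) that pulls a reflection wavelet out of broadband noise, the kind of operation HPC systems run across billions of traces.

```python
import numpy as np

def bandpass(trace, dt, low_hz, high_hz):
    """Zero out spectral components outside [low_hz, high_hz] and invert."""
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, d=dt)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=trace.size)

# Synthetic trace: a 30 Hz reflection wavelet buried in broadband noise.
rng = np.random.default_rng(42)
dt = 0.002                                   # 2 ms sample interval
t = np.arange(0, 2.0, dt)
signal = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 1.0) ** 2) / 0.01)
noisy = signal + rng.normal(0.0, 0.5, t.size)

# Keeping only the 20-40 Hz band recovers most of the reflection energy.
filtered = bandpass(noisy, dt, 20, 40)
```

In production, operations like this run in parallel across thousands of nodes; the algorithm stays simple while the data volume supplies the challenge.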
“Faster data processing and analysis lead to insights that can be leveraged to make real-time decisions with billion-dollar implications,” writes Tom Tyler of HPE. “And those companies making shrewd investments in HPC are enjoying a leg up on the competition, while gaining significant returns across the enterprise.”
HPCs Help Energy Firms Discover New Reserves
The results that energy companies have seen from their use of HPC for exploration have been remarkable. In 2017, BP discovered oil reserves estimated at more than 200 million barrels at a drilling field in the Gulf of Mexico. The oil had been hidden for years beneath a salt dome, which can distort seismic images and make the data difficult to analyze. Drilling for oil is a costly proposition under any conditions, but even more challenging in the ocean, and mistakes can be extremely expensive for energy companies. The computing power of BP's system, however, gave the company the confidence to determine that it could profitably drill at the location.
“This new system, with 2,700 Apollo 6000 nodes, is equipped with Intel Knights Landing chips, a 100GB low-latency network for processing speed from 4 to 9 petaflops, 1,140TB of active memory, and 30PB of storage. This extreme power enables BP to quickly analyze petabytes of seismic data, which recently resulted in the identification of 200 million barrels of potential oil reserves nearly 7 miles below the ocean floor.”
Similarly, the multinational energy company Eni employs the most powerful supercomputer in commercial use to simulate 15 years of oil reservoir production. The supercomputer, known as HPC4, uses 3,200 Nvidia Tesla graphics processing units and can perform 18.6 quadrillion floating point operations per second. (By comparison, the most powerful supercomputer in the world, the U.S. Energy Department’s Summit, can perform 200 petaflops.) HPC4 processed 100,000 reservoir models in about 15.5 hours, a task that would have taken older systems 10 days to complete, experts say.
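The HPC4 throughput figures quoted above can be checked with simple arithmetic (a sketch using only the article's own numbers; the 10-day baseline is the estimate cited for older systems):

```python
models = 100_000        # reservoir models processed
hpc4_hours = 15.5       # reported HPC4 run time
legacy_hours = 10 * 24  # the cited 10-day baseline on older systems

hpc4_rate = models / hpc4_hours      # roughly 6,450 models per hour
speedup = legacy_hours / hpc4_hours  # roughly 15x faster than before
```

That order-of-magnitude speedup is what turns a multi-week planning cycle into an overnight one.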
Advanced Use Cases for the Energy Industry
Energy companies employ high-performance computing in a variety of ways. These include:
- Using remote 3D visualization to locate valuable oil reserves
- Reducing the time needed to analyze exploratory data, which shortens the path from discovery to production and speeds time to profit
- Improving reclamation projects for existing wells by employing deep learning solutions on geologic data
- Monitoring data from oil pipelines and analyzing it in real time to detect and report problems
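The pipeline-monitoring use case in the last bullet can be sketched with a rolling-statistics anomaly detector. This is a minimal, hypothetical illustration (the readings, window size, and threshold are invented, not from any production system): each new sensor reading is compared against the mean and spread of the recent window, and sharp deviations are flagged.

```python
from collections import deque
import statistics

def detect_anomalies(readings, window=30, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling mean.

    A simplified stand-in for real-time pipeline monitoring: values more
    than `threshold` standard deviations from the rolling mean are flagged.
    """
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = statistics.fmean(recent)
            stdev = statistics.pstdev(recent)
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                anomalies.append(i)
        recent.append(value)
    return anomalies

# Steady pressure around 100 psi with one sudden drop at index 60.
readings = [100.0 + 0.1 * (i % 5) for i in range(100)]
readings[60] = 80.0
print(detect_anomalies(readings))  # → [60]
```

Real deployments would stream readings from thousands of sensors and use far more sophisticated models, but the principle, detecting departures from recent normal behavior in real time, is the same.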
Companies that employ these capabilities give themselves an advantage over their competitors and move closer to meeting the increasing demand for new sources of energy.