- Trained algorithms can sort through applications for oil and gas permits to extract information and themes, catching problems before they occur.
- AI would bring a new level of transparency to today’s outdated and overwhelmed energy development reviews.
This spring, regulators in Wyoming were scrambling to figure out how to process 10,000 applications for oil and gas permits filed since oil prices began to rise in early 2016. At the same time, new permits were streaming in at an unprecedented rate, raising concerns that some proposed energy projects would not get the scrutiny – or public input – they warranted.
Besides making life easier for regulators, machine learning could help watchdog groups and local residents better understand what risks new energy projects pose to the environment and public health.
A new AI application
I decided to run the idea by some acquaintances in the Bay Area tech community, one of whom had recently used Natural Language Processing to sort through thousands of public comments on the controversial XL Pipeline project.
What if we could use trained algorithms to process oil and gas filings as well, extracting information and themes that might otherwise fall through the cracks, I asked them.
It could turbo-charge our analysis of applications filed under the National Environmental Policy Act and help us catch impacts before they happen. Other kinds of development proposals could benefit, too, were the application to catch on.
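The theme extraction described here can be sketched in miniature. The snippet below is an illustration only, not the project's actual tooling: it surfaces recurring topics across filings by counting content words, and the sample filing excerpts are invented.

```python
import re
from collections import Counter

# Invented excerpts standing in for permit-application text.
FILINGS = [
    "The project could affect groundwater and surface water near the basin.",
    "Habitat loss for sage grouse is a concern, along with water withdrawals.",
    "Residents raised air quality and water quality concerns.",
]

STOPWORDS = frozenset({"the", "and", "a", "is", "for", "with", "could"})

def extract_themes(texts):
    """Count content words across filings to surface recurring themes."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

themes = extract_themes(FILINGS)
print(themes.most_common(2))  # "water" and "quality" recur across filings
```

A production system would use real NLP tooling (tokenization, stemming, topic models) rather than raw word counts, but the principle is the same: recurring language across thousands of documents points to the issues reviewers should look at first.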
Two people – a programmer and graduate student at Stanford University, and the XL Pipeline researcher, also at Stanford – were immediately excited about the idea and agreed to help me get started.
Scrub government websites? The project takes shape
Environmental Defense Fund laid the groundwork for the AI project earlier this year, planning to use a trained computer program to assess future impacts on greater sage-grouse habitat in the western United States.
It quickly became clear that using artificial intelligence to analyze energy development projects could bring a new level of transparency to today’s outdated and overwhelmed permitting process. Software paired with algorithms trained to “scrub” data from government websites can locate, download and process critical and potentially overlooked filings in a matter of hours.
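The “scrub” step amounts to harvesting document links from agency listing pages. Here is a minimal sketch using only Python’s standard library; the page snippet and file names are invented, and a real run would fetch the page over the network first.

```python
from html.parser import HTMLParser

class FilingLinkParser(HTMLParser):
    """Collect links to permit documents (PDFs) from an agency listing page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.lower().endswith(".pdf"):
                self.links.append(href)

# In practice the page would be fetched with urllib.request.urlopen();
# this invented snippet stands in for an agency site.
SAMPLE_PAGE = """
<html><body>
  <a href="/permits/apd-2017-0412.pdf">APD 2017-0412</a>
  <a href="/about">About</a>
  <a href="/permits/ea-converse-county.pdf">Draft EA</a>
</body></html>
"""

parser = FilingLinkParser()
parser.feed(SAMPLE_PAGE)
print(parser.links)  # only the two .pdf document links
```

Downloading and text-extracting each linked PDF would follow the same pattern: a small, repeatable script doing in hours what manual review does in weeks.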
It would also address another challenge: the lack of standard formats for permit applications, which makes such documents hard to identify. And it could flag potential problems with NEPA filings that might otherwise be missed.
Future impacts of projects – such as pollution of a groundwater basin or habitat loss for an imperiled animal – could be identified and compiled on a spreadsheet, registry or an online map for everyone to see.
For that, you also need resources.
Grueling work, big gains
Getting AI analysis of NEPA filings off the ground is a question of manpower, plain and simple. That also explains why progress with our initial greater sage-grouse project has been slow.
It requires weeks of data entry to train algorithms, on top of high-level math and data processing skills – neither of which your typical watchdog group or local government office possesses. But if we stop and consider what we’d gain – a grasp of what’s really happening in America’s rapidly developing rural areas – we may see just how much such advances could be worth.
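Those weeks of data entry are spent hand-labeling example passages so a classifier can learn which filings discuss which impacts. The toy sketch below, with invented labeled snippets, shows the idea with a bare-bones Naive Bayes classifier; a real project would use a library such as scikit-learn and far more training data.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

class TinyNaiveBayes:
    """Toy Naive Bayes text classifier, for illustration only."""

    def fit(self, examples):
        # examples: list of (text, label) pairs -- the hand-labeled data.
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter()
        for text, label in examples:
            self.label_counts[label] += 1
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_score = None, -math.inf
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            score = math.log(self.label_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for word in tokenize(text):
                # Laplace smoothing over the shared vocabulary.
                score += math.log(
                    (self.word_counts[label][word] + 1) / (n + len(self.vocab))
                )
            if score > best_score:
                best, best_score = label, score
        return best

# Invented hand-labeled snippets -- the "weeks of data entry".
TRAINING = [
    ("drilling near sage grouse leks and nesting habitat", "habitat"),
    ("core habitat for grouse would be fragmented", "habitat"),
    ("produced water may reach the aquifer", "water"),
    ("groundwater monitoring wells near the aquifer", "water"),
]

model = TinyNaiveBayes().fit(TRAINING)
print(model.predict("the aquifer could be contaminated"))
```

The labeling is the expensive part, not the math: every training pair above represents a human reading a passage and tagging what it is about.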
So I wasn’t surprised to get a call the other week from an entrepreneur on the East Coast who had heard about our project. He proposed a different approach to the problem, which would feed in reams of permit applications and then extract information we care about based on some simple criteria.
Exploring such methods of pulling from deep wells of data could sidestep the time-consuming work of training algorithms up front, saving an immense amount of time and resources. We think we’re onto something potentially big.
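A rule-based approach along those lines can be sketched with simple pattern matching: no training at all, just criteria written down as patterns. The rules and the sample sentence below are invented for illustration.

```python
import re

# Invented rule set: each "simple criterion" is a label and a pattern.
RULES = {
    "well_count": re.compile(r"(\d+)\s+(?:new\s+)?wells?", re.I),
    "acreage": re.compile(r"(\d[\d,]*)\s+acres", re.I),
    "species_mention": re.compile(r"sage[- ]grouse|mule deer|raptor", re.I),
}

def apply_rules(text):
    """Pull out facts we care about without any trained model."""
    hits = {}
    for label, pattern in RULES.items():
        m = pattern.search(text)
        if m:
            hits[label] = m.group(1) if m.groups() else m.group(0)
    return hits

sample = "The operator proposes 24 new wells on 1,280 acres of sage-grouse habitat."
print(apply_rules(sample))
```

The trade-off is the usual one: rules are cheap to write and easy to audit, but they only find what someone thought to look for, while trained models can surface patterns nobody anticipated.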