PUDL Release Notes

v2024.X.X (2024-XX-XX)

v2024.5.0 (2024-05-24)

We’ve just completed our quarterly integration of EIA data sources for 2024Q2 (in support of RMI’s Utility Transition Hub) and have also added a bunch of new tables over the last few months in an effort to better support energy system modelers (with support from GridLab). Details below.

New Data Coverage

EIA-860 & EIA-923

GridPath RA Toolkit



  • Added new NREL ATB tables with annual technology cost and performance projections. See issue #3465 and PRs #3498, #3570.



EIA Bulk Electricity Data

  • Updated the EIA Bulk Electricity data archive to include data that was available as of 2024-05-01, which covers up through 2024-02-01 (3 months more than the previously used archive). See PR #3615.

FERC Form 1

Data Cleaning

  • When generator_operating_date values are too inconsistent to be harvested successfully, we now take the max date within a year and attempt to harvest again, to rescue records lost because of inconsistent month reporting in EIA 860 and 860M. See #3340 and PR #3419. This change also fixed a bug that was preventing other columns harvested with a special process from being saved.

  • When ingesting FERC 1 XBRL filings, we now take the most recent non-null value instead of the value from the latest filing that applies for a specific row. This means that we no longer lose data if a utility posts a FERC filing with only a small number of updated values.
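For illustration, the “most recent non-null value” behavior can be sketched in pandas. This is a minimal stand-in with hypothetical column names, not the actual PUDL implementation:

```python
import pandas as pd

# Hypothetical filings for one utility: each later filing only reports
# the values that changed, leaving everything else null.
filings = pd.DataFrame({
    "filing_date": ["2022-04-01", "2022-04-01", "2022-06-15", "2022-06-15"],
    "row_id": ["a", "b", "a", "b"],
    "plant_cost": [100.0, 200.0, None, 250.0],
})

# For each row, keep the most recent non-null value rather than whatever
# the latest filing happens to contain. GroupBy.last() skips nulls.
latest = (
    filings.sort_values("filing_date")
    .groupby("row_id")["plant_cost"]
    .last()
)
```

Here row "a" keeps its April value because the June filing left it null, while row "b" picks up the June update.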

EIA - FERC1 Record Linkage Model Update

We merged in a refactor of the EIA plant parts to FERC1 plants record linkage model, which was generously supported by a CCAI Innovation Grant. This replaced the linear regression model with a model built with the Python package Splink. Splink provides helpful visualizations for understanding model performance and parameter tuning, which can be generated with devtools/splink-ferc1-eia-match.ipynb. We measured model performance with precision (how often a predicted match is correct), recall (what fraction of FERC records the model predicts a match for), and accuracy (overall correctness of the predictions). Model performance improved, and the new model has a precision of 0.94, a recall of 0.90, and an overall accuracy of 0.85.
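As a refresher, the three metrics can be computed from predicted and true match labels as below. The labels here are toy data, not the actual evaluation set:

```python
# Toy labels: True means "this FERC record matches this EIA plant part".
y_true = [True, True, True, False, False, True]
y_pred = [True, True, False, False, True, True]

tp = sum(p and t for p, t in zip(y_pred, y_true))        # true positives
fp = sum(p and not t for p, t in zip(y_pred, y_true))    # false positives
fn = sum(t and not p for p, t in zip(y_pred, y_true))    # false negatives

precision = tp / (tp + fp)  # correct predictions / all predictions made
recall = tp / (tp + fn)     # matches found / all true matches
accuracy = sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)
```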

Schema Changes

Bug Fixes

  • Ensure that all columns fed into the harvesting / reconciliation process are encoded before harvesting takes place, improving the consistency of harvested fields. See issue #3542 and PR #3558. This change also simplifies the encoding process in the vast majority of cases, since the same global set of encoders can be used on any dataframe, with every column encoded based on the field definitions and FK constraints associated with the column name.

CLI Changes

  • Removed the --clobber option from the ferc_to_sqlite command and associated assets. We rebuild these databases infrequently, and having to either edit the runtime parameters in Dagster’s Launchpad or manually remove the existing databases from the filesystem was brittle. Partly in response to issue #3612; see PR #3622.

v2024.2.6 (2024-02-25)

The main impetus behind this release is the quarterly update of some of our core datasets with preliminary data for 2023Q4. The EIA Form 860 – Annual Electric Generator Report, EPA Hourly Continuous Emission Monitoring System (CEMS), and bulk EIA API data are all up to date through the end of 2023, while the EIA Form 923 – Power Plant Operations Report lags a month behind and is currently only available through November 2023. We also addressed several issues we found in our initial release automation process that will make it easier for us to do more frequent releases, like this one!

We’re also publishing, for the first time, the full historical time series of generator data available in the EIA860M, rather than just using the most recent release to update the EIA860 outputs. This enables tracking of how planned fossil plant retirement dates have evolved over time.

There are also updates to our data validation system, a new version of Pandas, and experimental Parquet outputs. See below for the details.

New Data Coverage

  • Add EIA860M data through December 2023 #3313, #3367.

  • Add 2023 Q4 of CEMS data. See #3315, #3379.

  • Add EIA923 monthly data through November 2023 #3314, #3398, #3422.

  • Create a new table core_eia860m__changelog_generators which tracks the evolution of all generator data reported in the EIA860M, in particular the stated retirement dates. See issue #3330 and PR #3331. Previously only the most recent month of reported EIA860M data was available within the PUDL DB.

Release Infrastructure

  • Use the same logic to merge version tags into the stable branch as we are using to merge the nightly build tags into the nightly branch. See PR #3347.

  • Automatically place a temporary object hold on all versioned data releases that we publish to GCS, to ensure that they can’t be accidentally deleted. See issue #3400 and PR #3421.

Schema Changes

Data Validation with Pandera

We’ve started integrating pandera dataframe schemas and checks with dagster asset checks to validate data while our ETL pipeline is running instead of only after all the data has been produced. Initially we are using the various database schema checks that are generated by our metadata, but the goal is to migrate all of our data validation tests into this framework over time, and to start using it to encode any new data validations immediately. See issues #941, #1572, #3318, #3412 and PR #3282.
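The kind of constraint these checks encode can be illustrated with a hand-rolled stand-in (pandera expresses the same idea declaratively; the function and column names here are hypothetical):

```python
import pandas as pd

def check_non_negative(df: pd.DataFrame, column: str) -> None:
    """Fail fast if a column violates a schema constraint, rather than
    discovering the problem after the whole ETL has finished."""
    bad = df[df[column] < 0]
    if not bad.empty:
        raise ValueError(f"{len(bad)} rows have negative {column}")

gens = pd.DataFrame({"capacity_mw": [100.0, 50.5, 0.0]})
check_non_negative(gens, "capacity_mw")  # passes silently
```

Running such checks as Dagster asset checks means a bad batch of data halts the pipeline at the offending asset instead of propagating downstream.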

Pandas 2.2

We’ve updated to Pandas 2.2, which has a number of changes and deprecations. See PRs #3272, #3410.

  • Changes in how merge results are sorted impacted the assignment of unit_id_pudl values, so any hard-coded values that depend on the previous assignments will likely be incorrect now. We had to update a number of tests and FERC1-EIA record linkage training data to account for this change.

  • Pandas is also deprecating the use of the AS frequency alias, in favor of YS, so many references to the old alias have been updated.

  • We’ve switched to using the calamine engine for reading Excel files, which is much faster than the old openpyxl library.
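The frequency alias change is mechanical but easy to trip over; the old and new aliases produce identical year-start ranges:

```python
import pandas as pd

# The "AS" (year-start) alias is deprecated in pandas 2.2; "YS" is the
# equivalent replacement.
idx = pd.date_range("2020-01-01", periods=3, freq="YS")
print(list(idx.year))  # [2020, 2021, 2022]
```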

Parquet Outputs

The ETL now outputs PyArrow Parquet files for all tables that are written to the PUDL DB. The Parquet outputs are used as the interim storage for the ETL, rather than reading all tables out of the SQLite DB. We aren’t publicly distributing the Parquet outputs yet, but are giving them a test run with some existing users. See #3102 #3296, #3399.


  • Update PUDL to use Python 3.12. See issue #3327 and PR #3413.


This release contains only minor data updates compared to what we put out in December; however, the database naming conventions and release process have changed pretty dramatically. We are confident these changes will make the data we publish more accessible, and allow us to push out updates much more frequently going forward.

We also finally merged in improvements and generalizations to our record linkage processes, which were generously supported by a CCAI Innovation Grant. Connecting disparate public datasets that describe the same physical infrastructure and corporate entities is one of the most valuable improvements we make to the data, and we are excited to be able to do it in a more general, reproducible way so we can easily apply it to other datasets. We’ve already started work on a Mozilla Foundation grant to link SEC data to the FERC and EIA data we already have, allowing us to track ownership relationships between utility holding companies and their many subsidiaries. We expect the same kind of process will be useful for linking the PHMSA gas pipeline data to natural gas utilities that report to EIA and FERC.

Database Naming Conventions

Our main focus with this release was to overhaul the naming system for our nearly 200 database tables. This will hopefully make it easier to find what you’re looking for, especially if you are a new PUDL user. We think it will also make it easier for us to keep the database organized as we continue to expand its scope. For an explanation of the new naming conventions, see Naming Conventions, and to see the full list of all available tables, see the PUDL Data Dictionary.

This is a major breaking change for anybody accessing the database directly. Stick with the v2023.12.01 release until you’re ready to update your references to the old database table names. For the time being we have patched the old pudl.output.pudltabl.PudlTabl class so that it behaves as similarly as possible to before. However, we plan to remove this output class in the near future, and no new database tables will be made accessible through it. Going forward we expect users to use the database directly, freeing them from the need to install all of the software and dependencies which we use to produce it, hopefully improving the data’s technical accessibility and platform independence.
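Querying the database directly needs nothing beyond the Python standard library. The sketch below uses an in-memory stand-in so it is self-contained; with the real file you would call sqlite3.connect("pudl.sqlite"), and the table name and columns here are illustrative:

```python
import sqlite3

# Stand-in database; substitute sqlite3.connect("pudl.sqlite") in real use.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE core_eia860__scd_generators (plant_id_eia INT, capacity_mw REAL)"
)
conn.executemany(
    "INSERT INTO core_eia860__scd_generators VALUES (?, ?)",
    [(1, 100.0), (2, 50.0)],
)
rows = conn.execute(
    "SELECT plant_id_eia, capacity_mw FROM core_eia860__scd_generators "
    "ORDER BY capacity_mw DESC"
).fetchall()
```

pandas users can do the same thing in one line with pd.read_sql_query and the same connection.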

For more development details see #2765 which was the main epic tracking this process (with many sub-issues: #2777, #2788, #2812, #2868, #2992, #3030, #3173, #3174, #3223) and PR #2818.

Changes to CLI Tools

  • The epacems_to_parquet and state_demand scripts have been retired in favor of using the Dagster UI. See #3107 and #3086. Visualizations of hourly state-level electricity demand have been moved into our example notebooks which can be found both on Kaggle and on GitHub

  • The pudl_setup script has been retired. All input/output locations are now set using the $PUDL_INPUT and $PUDL_OUTPUT environment variables. See #3107 and #3086.

  • The pudl.analysis.service_territory.pudl_service_territories() script has been fixed, and can be used to generate GeoParquet outputs describing historical utility and balancing authority service territories. See #1174 and #3086.
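With pudl_setup retired, pointing PUDL at your data directories is just a matter of setting two environment variables (the directory paths here are illustrative; put the exports in your shell profile to make them persistent):

```shell
export PUDL_INPUT="$HOME/pudl_work/input"
export PUDL_OUTPUT="$HOME/pudl_work/output"
mkdir -p "$PUDL_INPUT" "$PUDL_OUTPUT"
```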

Development Infrastructure

  • Automate the process of doing software and data releases when a new version tag is pushed to facilitate continuous deployment. See #3127, #3158

  • To make development more convenient given our long-running integration tests, the PUDL repository now uses a merge queue.

  • Switch to using Google Batch for our data builds. See #3211.

  • Deprecated the dev branch and updated our nightly builds and GitHub workflow to use three persistent branches: main for bleeding edge changes, nightly for the most recent commit to have a successful nightly build output, and stable for the most recently released version of PUDL. The nightly and stable branches are protected and automatically updated. Build outputs are now written to gs://builds.catalyst.coop and retained for 30 days. See issues #3140, #3179 and PRs #3195, #3206, #3212, #3188, #3164

Record Linkage Improvements

New Data Coverage

  • Updated EPA Hourly Continuous Emission Monitoring System (CEMS) to switch to pulling the quarterly updates of CEMS instead of the annual files. Integrates CEMS through 2023Q3. See issue #2973 & PR #3096, #3139.

  • Began integration of PHMSA gas distribution and transmission tables into PUDL, extracting raw data from 1990-present. Note that these tables are not yet being written to the database as they are still raw. See epic #2848, and constituent PRs: #2932, #3242, #3254, #3260, #3262, #3266, #3267, #3269, #3270, #3279, #3280.

  • We began integration of data from EIA Forms 176, 191, and 757, describing natural gas sources, storage, transportation, and disposition. Note this data is still in its raw extracted form and is not yet being written to the PUDL DB. See #3304, #3227.

  • Updated the EIA Bulk Electricity data archive so that the available data now runs through 2023-10-01. See #3252. Also added this dataset to the set of data that will automatically generate archives each month. See this PUDL Archiver PR and this Zenodo archive.

Data Cleaning

Metadata Cleaning

  • Fix metadata structures and pyarrow schema generation process so that all tables can now be output as Parquet files. See issue #3102 and PR #3222.

  • Made a description field mandatory for all instances of Field and Resource. Updated pudl.metadata.fields.FIELD_METADATA and pudl.metadata.resources.RESOURCE_METADATA so that all of them have a description. This primarily affected EIA Form 861 – Annual Electric Power Industry Report tables. See #3224, #3283.

  • Removed fields that are not used in any tables and removed the xfail from the test_defined_fields_are_used test. #3224, #3283.


Dagster Adoption

  • After comparing Python orchestration tools #1487, we decided to adopt Dagster. Dagster will allow us to parallelize the ETL, persist dataframes at any step in the data cleaning process, visualize data dependencies, and run subsets of the ETL from upstream caches.

  • We are converting PUDL code to use dagster concepts in two phases. The first phase converts the ETL portion of the code base to use software defined assets #1570. The second phase converts the output and analysis tables in the pudl.output.pudltabl.PudlTabl class to use software defined assets, replacing the existing pudl_out output functions.

  • General changes:

    • pudl.etl is now a subpackage that collects all pudl assets into a dagster Definition.

    • The pudl_settings, Datastore and DatasetSettings are now dagster resources. See pudl.resources.

    • The pudl_etl and ferc_to_sqlite commands no longer support loading specific tables. The commands run all of the tables. Use dagster assets to run subsets of the tables.

    • The --clobber argument has been removed from the pudl_etl command.

    • New static method pudl.metadata.classes.Package.get_etl_group_tables returns the resource ids for a given ETL group.

    • pudl.settings.FercToSqliteSettings class now loads all FERC datasources if no datasets are specified.

    • The Excel extractor in pudl.extract.excel has been updated to parallelize Excel spreadsheet extraction using Dagster @multi_asset functionality, thanks to @dstansby. This is currently being used for EIA 860, 861 and 923 data. See #2385 and PRs #2644, #2943.

  • EIA ETL changes:

    • The EIA table level cleaning functions are now dagster assets. The table level cleaning assets now have a “clean_” prefix and a “_{datasource}” suffix to distinguish them from the final harvested tables.

    • pudl.transform.eia.transform() is now a @multi_asset that depends on all of the EIA table level cleaning functions / assets.

  • EPA CEMS ETL changes:

    • pudl.transform.epacems.transform() now loads the epacamd_eia and plants_entity_eia tables as dataframes using the pudl.io_manager.pudl_sqlite_io_manager instead of reading the tables using a pudl_engine.

    • Adds an Ohio plant that appears in 2021 CEMS but has been missing from EIA since 2018 to the additional_epacems_plants.csv sheet.

  • FERC ETL changes:

    • pudl.extract.ferc1.dbf2sqlite() and pudl.extract.xbrl.xbrl2sqlite() are now configurable dagster ops. These ops make up the ferc_to_sqlite dagster graph in pudl.ferc_to_sqlite.defs.

    • FERC 714 extraction methods are now subsettable by year, with 2019 and 2020 data included in the etl_fast.yml by default. See #2628 and PR #2649.

  • Census DP1 ETL changes:

New Asset Naming Convention

There are hundreds of new tables in pudl.sqlite now that the methods in PudlTabl have been converted to Dagster assets. This significant increase in tables and diversity of table types prompted us to create a new naming convention to make the table names more descriptive and organized. You can read about the new naming convention in the docs.

To help users migrate away from using PudlTabl and our temporary table names, we’ve created a google sheet that maps the old table names and PudlTabl methods to the new table names.

We’ve added deprecation warnings to the PudlTabl class. We plan to remove PudlTabl from the pudl package once our known users have successfully migrated to pulling data directly from pudl.sqlite.

Data Coverage

Data Cleaning



  • Replace references to deprecated pudl-scrapers and pudl-zenodo-datastore repositories with references to pudl-archiver repository in Working with the Datastore, and Existing Data Updates. See #2190.

  • pudl.etl is now a subpackage that collects all pudl assets into a dagster Definition. All pudl.etl._etl_{datasource} functions have been deprecated. The coordination of ETL steps is being handled by dagster.

  • The pudl.load module has been removed in favor of using the pudl.io_managers.pudl_sqlite_io_manager.

  • The pudl_etl and ferc_to_sqlite commands no longer support loading specific tables. The commands run all of the tables. Use dagster assets to run subsets of the tables.

  • The --clobber argument has been removed from the pudl_etl command.

  • pudl.transform.eia860.transform() and pudl.transform.eia923.transform() functions have been deprecated. The table level EIA cleaning functions are now coordinated using dagster.

  • pudl.transform.ferc1.transform() has been removed. The ferc1 table transformations are now being orchestrated with Dagster.

  • pudl.transform.ferc1.transform can no longer be executed as a script. Use dagster-webserver to execute just the FERC Form 1 pipeline.

  • pudl.extract.ferc1.extract_dbf, pudl.extract.ferc1.extract_xbrl, pudl.extract.ferc1.extract_xbrl_single, pudl.extract.ferc1.extract_dbf_single, pudl.extract.ferc1.extract_xbrl_generic, and pudl.extract.ferc1.extract_dbf_generic have all been deprecated. The extraction logic is now covered by the pudl.io_managers.ferc1_xbrl_sqlite_io_manager and pudl.io_managers.ferc1_dbf_sqlite_io_manager IO Managers.

  • pudl.extract.ferc1.extract_xbrl_metadata has been replaced by the pudl.extract.ferc1.xbrl_metadata_json() asset.

  • All sub classes of pudl.settings.GenericDatasetSettings() in pudl.settings no longer have table attributes because the ETL no longer supports loading specific tables via settings. Use dagster to select subsets of tables to process.


  • Updated PUDL to use Python 3.11. See #2408 & #2383

  • Apply start and end dates to ferc1 data in pudl.output.pudltabl.PudlTabl. See #2238 & #274.

  • Add generic spot fix method to transform process, to manually rescue FERC1 records. See #2254 & #1980.

  • Reverted a fix made in #1909, which mapped all plants located in NY state that reported a balancing authority code of “ISONE” to “NYISO”. These plants now retain their original EIA codes. Plants with manual re-mapping of BA codes have also been fixed to have correctly updated BA names. See #2312 and #2255.

  • Fixed a column naming bug that was causing EIA860 monthly retirement dates to get nulled out. See #2834 and #2835

  • Switched to using conda-lock and Makefile to manage testing and python environment. Moved away from packaging PUDL for distribution via PyPI and conda-forge and toward treating it as an application. See #2968

  • The two-point-ohening: We now require Pandas v2 (see #2320), SQLAlchemy v2 (see #2267) and Pydantic v2 (see #3051).

  • Update the names of our FERC SQLite DBs to indicate what source data they come from. See issue #3079 and #3094.


Data Coverage

Data Analysis

  • Instead of relying on the EIA API to fill in redacted fuel prices with aggregate values for individual states and plants, use the archived eia_bulk_elec data. This means we no longer have any reliance on the API, which should make the fuel price filling faster and more reliable. Coverage is still only about 90%. See #1764 and #1998. Additional filling with aggregate and/or imputed values is still on the workplan. You can follow the progress in #1708.
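Conceptually, the filling step replaces each redacted (null) fuel price with the corresponding aggregate value, along these lines (a simplified sketch with hypothetical column names, not the actual PUDL code):

```python
import pandas as pd

# Plant-level prices with one redacted (null) value.
prices = pd.DataFrame({
    "state": ["CO", "CO", "TX"],
    "month": ["2020-01", "2020-01", "2020-01"],
    "fuel_cost_per_mmbtu": [2.5, None, 3.0],
})

# State-level aggregates from the archived bulk data.
aggregates = pd.DataFrame({
    "state": ["CO", "TX"],
    "month": ["2020-01", "2020-01"],
    "agg_fuel_cost_per_mmbtu": [2.6, 3.1],
})

# Attach the aggregate to each record, then use it only where the
# plant-level price is missing.
filled = prices.merge(aggregates, on=["state", "month"], how="left")
filled["fuel_cost_per_mmbtu"] = filled["fuel_cost_per_mmbtu"].fillna(
    filled["agg_fuel_cost_per_mmbtu"]
)
```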

Nightly Data Builds

  • We added infrastructure to run the entire ETL and all tests nightly so we can catch data errors when they are merged into dev. This allows us to automatically update the PUDL Intake data catalogs when there are new code releases. See #1177 for more details.

  • Created a docker image that installs PUDL and its dependencies. The build-deploy-pudl.yaml GitHub Action builds and pushes the image to Docker Hub and deploys the image on a Google Compute Engine instance. The ETL outputs are then loaded to Google Cloud buckets for the data catalogs to access.

  • Added GoogleCloudStorageCache support to ferc1_to_sqlite and censusdp1tract_to_sqlite commands and pytest.

  • Allow users to create monolithic and partitioned EPA CEMS outputs without having to clobber or move any existing CEMS outputs.

  • GoogleCloudStorageCache now supports accessing requester pays buckets.

  • Added a --loglevel arg to the package entrypoint commands.

Database Schema Changes

  • After learning that generators’ prime movers do very occasionally change over time, we recategorized the prime_mover_code column in our entity resolution process to enable the rare but real variability over time. We moved the prime_mover_code column from the statically harvested/normalized data column to an annually harvested data column (i.e. from generators_entity_eia to generators_eia860) #1600. See #1585 for more details.

  • Added operational_status_eia to our static metadata tables (See PUDL Code Metadata). Used these standard codes and code fixes to clean operational_status_code in the generators_entity_eia table. #1624

  • Moved a number of slowly changing plant attributes from the plants_entity_eia table to the annual plants_eia860 table. See #1748 and #1749. This was initially inspired by the desire to more accurately reproduce the aggregated fuel prices which are available in the EIA’s API. Along with state, census region, month, year, and fuel type, those prices are broken down by industrial sector. Previously sector_id_eia (an aggregation of several primary_purpose_naics_id values) had been assumed to be static over a plant’s lifetime, when in fact it can change if e.g. a plant is sold to an IPP by a regulated utility. Other plant attributes which are now allowed to vary annually include:

    • balancing_authority_code_eia

    • balancing_authority_name_eia

    • ferc_cogen_status

    • ferc_exempt_wholesale_generator

    • ferc_small_power_producer

    • grid_voltage_1_kv

    • grid_voltage_2_kv

    • grid_voltage_3_kv

    • iso_rto_code

    • primary_purpose_id_naics

  • Renamed grid_voltage_kv to grid_voltage_1_kv in the plants_eia860 table, to follow the pattern of many other multiply reported values.

  • Added a balancing_authorities_eia coding table mapping BA codes found in the EIA Form 860 – Annual Electric Generator Report and EIA Form 923 – Power Plant Operations Report to their names, cleaning up non-standard codes, and fixing some reporting errors for PACW vs. PACE (PacifiCorp West vs. East) based on the state associated with the plant reporting the code. Also added backfilling for codes in years before 2013 when BA Codes first started being reported, but only in the output tables. See: #1906, #1911

  • Renamed and removed some columns in the EPA Hourly Continuous Emission Monitoring System (CEMS) dataset. unitid was changed to emissions_unit_id_epa to clarify the type of unit it represents. unit_id_epa was removed because it is a unique identifier for emissions_unit_id_epa and not otherwise useful or transferable to other datasets. facility_id was removed because it is specific to EPA’s internal database and does not aid in connection with other data. #1692

  • Added a new table political_subdivisions which consolidated various bits of information about states, territories, provinces etc. that had previously been scattered across constants stored in the codebase. The ownership_eia860 table had a mix of state and country information stored in the same column, and to retain all of it we added a new owner_country_code column. #1966

Data Accuracy

Helper Function Updates

  • Replaced the PUDL helper function clean_merge_asof, which merged two dataframes reported at different temporal granularities, for example monthly vs. yearly data. The reworked function, pudl.helpers.date_merge, is more flexible and faster, and replaces clean_merge_asof in the MCOE table and EIA 923 tables. See #1103, #1550

  • The helper function pudl.helpers.expand_timeseries was also added, which expands a dataframe to include a full timeseries of data at a certain frequency. The coordinating function pudl.helpers.full_timeseries_date_merge first calls pudl.helpers.date_merge to merge two dataframes of different temporal granularities, and then calls pudl.helpers.expand_timeseries to expand the merged dataframe to a full timeseries. The new timeseries_fillin argument optionally makes this function generate an MCOE table with a full monthly timeseries, even in years when annually reported generators don’t have matching monthly data. See #1550

  • Renamed the fix_leading_zero_gen_ids function to remove_leading_zeros_from_numeric_strings because it’s used to fix more than just the generator_id column. Included a new argument to specify which column you’d like to fix.
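The core of a date_merge-style operation is pandas merge_asof, which attaches the most recent annual record to each monthly record. This is a simplified sketch of the idea, not the actual helper:

```python
import pandas as pd

monthly = pd.DataFrame({
    "report_date": pd.to_datetime(["2020-01-01", "2020-06-01", "2021-02-01"]),
    "net_generation_mwh": [10.0, 12.0, 11.0],
})
annual = pd.DataFrame({
    "report_date": pd.to_datetime(["2020-01-01", "2021-01-01"]),
    "capacity_mw": [100.0, 120.0],
})

# Each monthly row picks up the most recent annual value on or before
# its report_date (merge_asof's default "backward" direction).
merged = pd.merge_asof(
    monthly.sort_values("report_date"),
    annual.sort_values("report_date"),
    on="report_date",
)
```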

Plant Parts List Module Changes

  • We refactored a couple of components of the Plant Parts List module in preparation for the next round of entity matching of EIA and FERC Form 1 records with the Panda model developed by the Chu Data Lab at Georgia Tech, through work funded by a CCAI Innovation Grant. The labeling of different aggregations of EIA generators as the true granularity was sped up, resulting in faster generation of the final plant parts list. In addition, the generation of the installation_year column in the plant parts list was fixed and a construction_year column was also added. Finally, operating_year was added as a level that the EIA generators are now aggregated to.

  • The mega generators table and in turn the plant parts list requires the MCOE table to generate. The MCOE table is now created with the new pudl.helpers.date_merge helper function (described above). As a result, now by default only columns from the EIA 860 generators table that are necessary for the creation of the plant parts list will be included in the MCOE table. This list of columns is defined by the global pudl.analysis.mcoe.DEFAULT_GENS_COLS. If additional columns that are not part of the default list are needed from the EIA 860 generators table, these columns can be passed in with the gens_cols argument. See #1550

  • For memory efficiency, appropriate columns are now cast to string and categorical types when the full plant parts list is created. The resource and field metadata is now included in the PUDL metadata. See #1865

  • For clarity and specificity, the plant_name_new column was renamed plant_name_ppe and the ownership column was renamed ownership_record_type. See #1865

  • The PLANT_PARTS_ORDERED list was removed and PLANT_PARTS is now an OrderedDict that establishes the plant parts hierarchy in its keys. All references to PLANT_PARTS_ORDERED were replaced with the PLANT_PARTS keys. See #1865


  • Used the data source metadata class added in release 0.6.0 to dynamically generate the data source documentation (See Data Sources). #1532

  • The EIA plant parts list was added to the resource and field metadata. This is the first output table to be included in the metadata. See #1865


  • Fixed broken links in the documentation since the Air Markets Program Data (AMPD) changed to Clean Air Markets Data (CAMD).

  • Added graphics and clearer descriptions of EPA data and reporting requirements to the EPA Hourly Continuous Emission Monitoring System (CEMS) page. Also included information about the epacamd_eia crosswalk.

Bug Fixes

  • Dask v2022.4.2 introduced breaking changes into dask.dataframe.read_parquet(). However, we didn’t catch this when it happened because it’s only a problem when there’s more than one row-group. Now we’re processing 2019-2020 data for both ID and ME (two of the smallest states) in the tests. Also restricted the allowed Dask versions in our setup.py so that we get notified by the dependabot any time even a minor update happens to any of the packages we depend on that use calendar versioning. See #1618.

  • Fixed a testing bug where the partitioned EPA CEMS outputs generated using parallel processing were getting output in the same output directory as the real ETL, which should never happen. See #1618.

  • Changed the way fixes to the EIA-861 balancing authority names and IDs are applied, so that they still work when only some years of data are being processed. See #1671 and #828.

Dependencies / Environment

  • In conjunction with getting the @dependabot set up to merge its own PRs if CI passes, we tightened the version constraints on a lot of our dependencies. This should reduce the frequency with which we get surprised by changes breaking things after release. See #1655

  • We’ve switched to using mambaforge to manage our environments internally, and are recommending that users use it as well.

  • We’re moving toward treating PUDL like an application rather than a library, and part of that is no longer trying to be compatible with a wide range of versions of our dependencies, instead focusing on a single reproducible environment that is associated with each release, using lockfiles, etc. See #1669

  • As an “application” PUDL now supports only the most recent major version of Python (currently 3.10). We used pyupgrade and pep585-upgrade to update the syntax to follow Python 3.10 norms, and are now using those packages as pre-commit hooks as well. See #1685

0.6.0 (2022-03-11)

Data Coverage

New Analyses

  • For the purposes of linking EIA and FERC Form 1 records, we (mostly @cmgosnell) have created a new output called the Plant Parts List in pudl.analysis.plant_parts_eia which combines many different sub-parts of the EIA generators based on their fuel type, prime movers, ownership, etc. This allows a huge range of hypothetically possible FERC Form 1 plant records to be synthesized, so that we can identify exactly what data in EIA should be associated with what data in FERC using a variety of record linkage & entity matching techniques. This is still a work in progress, both with our partners at RMI, and in collaboration with the Chu Data Lab at Georgia Tech, through work funded by a CCAI Innovation Grant. #1157


  • Column data types for our database and Apache Parquet outputs, as well as pandas dataframes are all based on the same underlying schemas, and should be much more consistent. #1370, #1377, #1408

  • Defined a data source metadata class pudl.metadata.classes.DataSource using Pydantic to store information and procedures specific to each data source (e.g. FERC Form 1 – Annual Report of Major Electric Utilities, EIA Form 923 – Power Plant Operations Report). #1446

  • Use the data source metadata classes to automatically export rich metadata for use with our Datasette deployment. #1479

  • Use the data source metadata classes to store rich metadata for use with our Zenodo raw data archives so that information is no longer duplicated and liable to get out of sync. #1475

  • Added static tables and metadata structures that store definitions and additional information related to the many coded categorical columns in the database. These tables are exported directly into the documentation (See PUDL Code Metadata). The metadata structures also document all of the non-standard values that we’ve identified in the raw data, and the standard codes that they are mapped to. #1388

  • As a result of all these metadata improvements we were finally able to close #52 and delete the pudl.constants junk-drawer module… after 5 years.

Data Cleaning

  • Fixed a few inaccurately hand-mapped PUDL Plant & Utility IDs. #1458, #1480

  • We are now using the coding table metadata mentioned above and the foreign key relationships that are part of the database schema to automatically recode any column that refers to the codes defined in the coding table. This results in much more uniformity across the whole database, especially in the EIA energy_source_code columns. #1416

  • In the raw input data, NULL values are often represented by the empty string or other placeholder values that are not truly NULL. We cleaned these up in all of the categorical / coded columns so that their values can be validated by either an ENUM constraint in the database, or a foreign key constraint linking them to the static coding tables. Missing values should now primarily use the pandas NA value, or numpy.nan in the case of floats. #1376

  • Many FIPS and ZIP codes that appear in the raw data are stored as integers rather than strings, so they lose their leading zeros and become invalid in many contexts. We now clean them all up with the same method, enforcing a uniform field width with leading zero padding. This also allows us to enforce a regex pattern constraint on these fields in the database outputs. #1405, #1476
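The zero-padding cleanup is conceptually simple. A minimal sketch (the function name and exact coercion steps are illustrative, not PUDL's implementation):

```python
import pandas as pd

def zero_pad_codes(col: pd.Series, n_digits: int) -> pd.Series:
    """Left-pad FIPS/ZIP codes that lost their leading zeros.

    Handles codes arriving as integers, floats (e.g. "2134.0"), or strings;
    anything non-numeric becomes NA.
    """
    return (
        pd.to_numeric(col, errors="coerce")  # "2134.0" and 2134 alike
        .astype("Int64")                     # nullable int drops the ".0"
        .astype("string")
        .str.zfill(n_digits)                 # restore leading zeros
    )

zips = pd.Series([501, "2134.0", "99501", None])
print(zero_pad_codes(zips, 5).tolist())  # e.g. 501 -> "00501"
```

A regex constraint like `^\d{5}$` can then be enforced on the resulting column in the database outputs.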

  • We’re now able to fill in missing values in the very useful generators_eia860 technology_description field. Currently this is optionally available in the output layer, but we want to put more of this kind of data repair into the core database going forward. #1075


  • Created a simple script that allows our SQLite DB to be loaded into Google’s CloudSQL hosted PostgreSQL service using pgloader and pg_dump. #1361

  • Made better use of our Pydantic settings classes to validate and manage the ETL settings that are read in from YAML files and passed around throughout the functions that orchestrate the ETL process. #1506

  • PUDL now works with pandas 1.4 (#1421) and Python 3.10 (#1373).

  • Addressed a bunch of deprecation warnings being raised by geopandas. #1444

  • Integrated the pre-commit.ci service into our GitHub CI in order to automatically apply a variety of code formatting & checks to all commits. #1482

  • Fixed random seeds to avoid stochastic test coverage changes in the pudl.analysis.timeseries_cleaning module. #1483

  • Silenced a bunch of 3rd party module warnings in the tests. See #1476

Bug Fixes

  • In addressing #851, #1296, #1325 the generation_fuel_eia923 table was split to create a generation_fuel_nuclear_eia923 table since they have different primary keys. This meant that the pudl.output.pudltabl.PudlTabl.gf_eia923() method no longer included nuclear generation. This impacted the net generation allocation process and MCOE calculations downstream, which were expecting to have all the reported nuclear generation. This has now been fixed, and the generation fuel output includes both the nuclear and non-nuclear generation, with nuclear generation aggregated across nuclear unit IDs so that it has the same primary key as the rest of the generation fuel table. #1518
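The aggregation that restores a uniform primary key can be illustrated with a toy example. Column names follow the tables described above, but the records and values are hypothetical:

```python
import pandas as pd

# Toy nuclear generation fuel records: the raw nuclear table is keyed by
# (plant_id_eia, report_date, nuclear_unit_id, ...), while the rest of the
# generation fuel table has no nuclear_unit_id column.
gf_nuclear = pd.DataFrame({
    "plant_id_eia": [6022, 6022],
    "report_date": ["2020-01-01", "2020-01-01"],
    "nuclear_unit_id": [1, 2],
    "prime_mover_code": ["ST", "ST"],
    "energy_source_code": ["NUC", "NUC"],
    "net_generation_mwh": [800_000.0, 750_000.0],
})

# Summing across nuclear unit IDs yields records with the same primary key
# as the non-nuclear generation fuel table, so the two can be concatenated
# in the combined output.
gf_agg = (
    gf_nuclear
    .groupby(
        ["plant_id_eia", "report_date", "prime_mover_code", "energy_source_code"],
        as_index=False,
    )["net_generation_mwh"]
    .sum()
)
print(gf_agg)
```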

  • EIA changed the URL of their API to only accept connections over HTTPS, but we had a hard-coded HTTP URL, meaning the historical fuel price filling that uses the API broke. This has been fixed.

Known Issues

  • Everything is fiiiiiine.

0.5.0 (2021-11-11)

Data Coverage Changes

SQLite and Parquet Outputs

  • The ETL pipeline now outputs SQLite databases and Apache Parquet datasets directly, rather than generating tabular data packages. This is much faster and simpler, and also takes up less space on disk. Running the full ETL including all EPA CEMS data should now take around 2 hours if you have all the data downloaded.

  • The new pudl.load.sqlite and pudl.load.parquet modules contain this logic. The pudl.load.csv and pudl.load.metadata modules have been removed along with other remaining datapackage infrastructure. See #1211

  • Many more tables now have natural primary keys explicitly specified within the database schema.

  • The datapkg_to_sqlite script has been removed and the epacems_to_parquet script can now be used to process the original EPA CEMS CSV data directly to Parquet using an existing PUDL database to source plant timezones. See #1176, #806.

  • Data types, specified value constraints, and the uniqueness / non-null constraints on primary keys are validated during insertion into the SQLite DB.

  • The PUDL ETL CLI pudl.etl.cli now has flags to toggle various constraint checks, including --ignore-foreign-key-constraints, --ignore-type-constraints, and --ignore-value-constraints.

New Metadata System

With the deprecation of tabular data package outputs, we’ve adopted a more modular metadata management system that uses Pydantic. This setup will allow us to easily validate the metadata schema and export to a variety of formats to support data distribution via Datasette and Intake catalogs, and automatic generation of data dictionaries and documentation. See #806, #1271, #1272 and the pudl.metadata subpackage. Many thanks to @ezwelty for most of this work.

ETL Settings File Format Changed

We are also using Pydantic to parse and validate the YAML settings files that tell PUDL what data to include in an ETL run. If you have any old settings files of your own lying around they’ll need to be updated. Examples of the new format will be deployed to your system if you re-run the pudl_setup script. Or you can make a copy of the etl_full.yml or etl_fast.yml files that are stored under src/pudl/package_data/settings and edit them to reflect your needs.
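The pattern looks roughly like the following minimal sketch. The class and field names here (EtlSettings, Eia860Settings) are illustrative only; the real PUDL settings classes have more fields and cross-field validation:

```python
from typing import List

from pydantic import BaseModel

class Eia860Settings(BaseModel):
    years: List[int]

class EtlSettings(BaseModel):
    eia860: Eia860Settings

# A dict like this is what yaml.safe_load() returns when reading an
# etl_fast.yml-style settings file.
raw = {"eia860": {"years": [2018, 2019, 2020]}}

settings = EtlSettings(**raw)
print(settings.eia860.years)  # typed, validated access
```

Malformed settings (e.g. a string where a year list is expected) fail fast with a validation error instead of causing obscure failures deep in the ETL.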

Database Schema Changes

With the direct database output and the new metadata system, it’s much easier for us to create foreign key relationships automatically. Updates that are in progress to the database normalization and entity resolution process also benefit from using natural primary keys when possible. As a result we’ve made some changes to the PUDL database schema, which will probably affect some users.

  • We have split out a new generation_fuel_nuclear_eia923 table from the existing generation_fuel_eia923 table, as nuclear generation and fuel consumption are reported at the generation unit level, rather than the plant level, requiring a different natural primary key. See #851, #1296, #1325.

  • Implementing a natural primary key for the boiler_fuel_eia923 table required the aggregation of a small number of records that didn’t have well-defined prime_mover_code values. See #852, #1306, #1311.

  • We repaired, aggregated, or dropped a small number of records in the generation_eia923 (See #1208, #1248) and ownership_eia860 (See #1207, #1258) tables due to null values in their primary key columns.

  • Many new foreign key constraints are being enforced between the EIA data tables, entity tables, and coding tables. See #1196.

  • Fuel types and energy sources reported to EIA are now defined in / constrained by the static energy_sources_eia table.

  • The columns that indicate the mode of transport for various fuels now contain short codes rather than longer labels, and are defined in / constrained by the static fuel_transportation_modes_eia table.

  • In the simplified FERC 1 fuel type categories, we’re now using other instead of unknown.

  • Several columns have been renamed to harmonize meanings between different tables and datasets, including:

    • In generation_fuel_eia923 and boiler_fuel_eia923 the fuel_type and fuel_type_code columns have been replaced with energy_source_code, which appears in various forms in generators_eia860 and fuel_receipts_costs_eia923.

    • fuel_qty_burned is now fuel_consumed_units

    • fuel_qty_units is now fuel_received_units

    • heat_content_mmbtu_per_unit is now fuel_mmbtu_per_unit

    • sector_name and sector_id are now sector_name_eia and sector_id_eia

    • primary_purpose_naics_id is now primary_purpose_id_naics

    • mine_type_code is now mine_type (a human readable label, not a code).

New Analyses

  • Added a deployed console script for running the state-level hourly electricity demand allocation, using FERC 714 and EIA 861 data, simply called state_demand and implemented in pudl.analysis.state_demand. This script existed in the v0.4.0 release, but was not deployed on the user’s system.

Known Issues

  • The pudl_territories script has been disabled temporarily due to a memory issue. See #1174

  • Utility and Balancing Authority service territories for 2020 have not been vetted, and may contain errors or omissions. In particular there seems to be some missing demand in ND, SD, NE, KS, and OK. See #1310

Updated Dependencies

  • SQLAlchemy 1.4.x: Addressed all deprecation warnings associated with API changes coming in SQLAlchemy 2.0, and bumped current requirement to 1.4.x

  • Pandas 1.3.x: Addressed many data type issues resulting from changes in how Pandas preserves and propagates ExtensionArray / nullable data types.

  • PyArrow v5.0.0: Updated to the most recent version

  • PyGEOS v0.10.x: Updated to the most recent version

  • contextily has been removed, since we only used it optionally for making a single visualization and it has substantial dependencies itself.

  • goodtables-pandas-py has been removed since we’re no longer producing or validating datapackages.

  • SQLite 3.32.0: The type checks that we’ve implemented currently only work with SQLite version 3.32.0 or later, as we discovered in debugging build failures on PR #1228. Unfortunately Ubuntu 20.04 LTS shipped with SQLite 3.31.1. Using conda to manage your Python environment avoids this issue.
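You can check which SQLite library your Python environment is linked against before running the ETL. Note that it is the library version (sqlite3.sqlite_version), not the version of the Python sqlite3 module itself, that matters here:

```python
import sqlite3

# Compare the linked SQLite library version against the 3.32.0 minimum
# needed for the type checks described above.
version = tuple(int(p) for p in sqlite3.sqlite_version.split(".")[:2])
if version < (3, 32):
    print(f"SQLite {sqlite3.sqlite_version} is too old; 3.32.0+ required.")
else:
    print(f"SQLite {sqlite3.sqlite_version} is new enough.")
```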

0.4.0 (2021-08-16)

This is a ridiculously large update including more than a year and a half’s worth of work.

New Data Coverage

Documentation & Data Accessibility

We’ve updated and (hopefully) clarified the documentation, and no longer expect most users to perform the data processing on their own. Instead, we are offering several methods of directly accessing already processed data:

Users who still want to run the ETL themselves will need to set up the PUDL development environment

Data Cleaning & Integration

  • We now inject placeholder utilities in the cloned FERC Form 1 database when respondent IDs appear in the data tables, but not in the respondent table. This addresses a bunch of unsatisfied foreign key constraints in the original databases published by FERC.

  • We’re doing much more software testing and data validation, and so hopefully we’re catching more issues early on.

Hourly Electricity Demand and Historical Utility Territories

With support from GridLab and in collaboration with researchers at Berkeley’s Center for Environmental Public Policy, we did a bunch of work on spatially attributing hourly historical electricity demand. This work was largely done by @ezwelty and @yashkumar1803 and included:

  • Semi-programmatic compilation of historical utility and balancing authority service territory geometries based on the counties associated with utilities, and the utilities associated with balancing authorities in the EIA 861 (2001-2019). See e.g. #670 but also many others.

  • A method for spatially allocating hourly electricity demand from FERC 714 to US states based on the overlapping historical utility service territories described above. See #741

  • A fast timeseries outlier detection routine for cleaning up the FERC 714 hourly data using correlations between the time series reported by all of the different entities. See #871

Net Generation and Fuel Consumption for All Generators

We have developed an experimental methodology to produce net generation and fuel consumption for all generators. The process has known issues and is being actively developed. See #989

Net electricity generation and fuel consumption are reported in multiple ways in the EIA 923. The generation_fuel_eia923 table reports both generation and fuel consumption, and breaks them down by plant, prime mover, and fuel. In parallel, the generation_eia923 table reports generation by generator, and the boiler_fuel_eia923 table reports fuel consumption by boiler.

The generation_fuel_eia923 table is more complete, but the generation_eia923 + boiler_fuel_eia923 tables are more granular. The generation_eia923 table includes only ~55% of the total MWhs reported in the generation_fuel_eia923 table.

The pudl.analysis.allocate_gen_fuel module estimates the net electricity generation and fuel consumption attributable to individual generators based on the more expansive reporting of the data in the generation_fuel_eia923 table.
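The core idea can be illustrated with a drastically simplified sketch. The real allocation in pudl.analysis.allocate_gen_fuel also accounts for prime movers, fuel types, and partially reported plants; the data, column names, and weighting scheme below are illustrative assumptions only:

```python
import pandas as pd

# Toy generator records: plant 2 reports no generator-level generation.
gens = pd.DataFrame({
    "plant_id_eia": [1, 1, 2, 2],
    "generator_id": ["A", "B", "C", "D"],
    "reported_mwh": [300.0, 100.0, None, None],
    "capacity_mw": [50.0, 50.0, 30.0, 10.0],
})
# Plant-level totals from the (more complete) generation fuel table.
plant_totals = pd.Series({1: 500.0, 2: 400.0})

def allocate(group: pd.DataFrame) -> pd.Series:
    """Spread a plant total across its generators, weighting by reported
    generation, or by capacity when no generator-level data exists."""
    total = plant_totals[group.name]
    weights = group["reported_mwh"]
    if weights.isna().all():
        weights = group["capacity_mw"]
    return total * weights / weights.sum()

gens["allocated_mwh"] = gens.groupby(
    "plant_id_eia", group_keys=False
).apply(allocate)
print(gens[["generator_id", "allocated_mwh"]])
```

Note how plant 1's generators scale up proportionally to match the plant total, while plant 2's generators fall back to capacity weighting.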

Data Management and Archiving

  • We now use a series of web scrapers to collect snapshots of the raw input data that is processed by PUDL. These original data are archived as Frictionless Data Packages on Zenodo, so that they can be accessed reproducibly and programmatically via a REST API. This addresses the problems we were having with the v0.3.x releases, in which the original data on the agency websites was liable to be modified long after its “final” release, rendering it incompatible with our software. These scrapers and the Zenodo archiving scripts can be found in our pudl-scrapers and pudl-zenodo-storage repositories. The archives themselves can be found within the Catalyst Cooperative community on Zenodo

  • There’s an experimental caching system that allows these Zenodo archives to work as long-term “cold storage” for citation and reproducibility, with cloud object storage acting as a much faster way to access the same data for day to day non-local use, implemented by @rousik

  • We’ve decided to shift to producing a combination of relational databases (SQLite files) and columnar data stores (Apache Parquet files) as the primary outputs of PUDL. Tabular Data Packages didn’t end up serving either database or spreadsheet users very well. The CSV files were often too large to access via spreadsheets, and users missed out on the relationships between data tables. Needing to separately load the data packages into SQLite and Parquet was a hassle and generated a lot of overly complicated and fragile code.

Known Issues

  • The EIA 861 and FERC 714 data are not yet integrated into the SQLite database outputs, because we need to overhaul our entity resolution process to accommodate them in the database structure. That work is ongoing, see #639

  • The EIA 860 and EIA 923 data don’t cover exactly the same range of years. EIA 860 only goes back to 2004, while EIA 923 goes back to 2001. This is because the pre-2004 EIA 860 data is stored in the DBF file format, and we need to update our extraction code to deal with the different format. This means some analyses that require both EIA 860 and EIA 923 data (like the calculation of heat rates) can only be performed as far back as 2004 at the moment. See #848

  • There are 387 EIA utilities and 228 EIA plants which appear in the EIA 923, but which haven’t yet been assigned PUDL IDs and associated with the corresponding utilities and plants reported in the FERC Form 1. These entities show up in the 2001-2008 EIA 923 data that was just integrated. These older plants and utilities can’t yet be used in conjunction with FERC data. When the EIA 860 data for 2001-2003 has been integrated, we will finish this manual ID assignment process. See #848, #1069

  • 52 of the algorithmically assigned plant_id_ferc1 values found in the plants_steam_ferc1 table are currently associated with more than one plant_id_pudl value (99 PUDL plant IDs are involved), indicating either that the algorithm is making poor assignments, or that the manually assigned plant_id_pudl values are incorrect. This is out of several thousand distinct plant_id_ferc1 values. See #954

  • The county FIPS codes associated with coal mines reported in the Fuel Receipts and Costs table are being treated inconsistently in terms of their data types, especially in the output functions, so they are currently being output as floating point numbers that have been cast to strings, rather than zero-padded integers that are strings. See #1119

0.3.2 (2020-02-17)

The primary changes in this release:

  • The 2009-2010 data for EIA 860 have been integrated, including updates to the data validation test cases.

  • Output tables are more uniform and less restrictive in what they include, no longer requiring PUDL Plant & Utility IDs in some tables. This release was used to compile v1.1.0 of the PUDL Data Release, which is archived at Zenodo under this DOI: https://doi.org/10.5281/zenodo.3672068

    With this release, the EIA 860 & 923 data now (finally!) cover the same span of time. We do not anticipate integrating any older EIA 860 or 923 data at this time.

0.3.1 (2020-02-05)

A couple of minor bugs were found in the preparation of the first PUDL data release:

  • No maximum version of Python was being specified in setup.py. PUDL currently only works on Python 3.7, not 3.8.

  • The epacems_to_parquet conversion script was erroneously attempting to verify the availability of raw input data files, despite the fact that it now relies on the packaged post-ETL epacems data. We didn’t catch this earlier because the script was always run in a context where the original data was lying around… but that’s not the case when someone just downloads the released data packages and tries to load them.

0.3.0 (2020-01-30)

This release is mostly about getting the infrastructure in place to do regular data releases via Zenodo, and updating ETL with 2018 data.

Added lots of data validation / quality assurance test cases in anticipation of archiving data. See the pudl.validate module for more details.

New data since v0.2.0 of PUDL:

  • EIA Form 860 for 2018

  • EIA Form 923 for 2018

  • FERC Form 1 for 1994-2003 and 2018 (select tables)

We removed the FERC Form 1 accumulated depreciation table from PUDL because it requires detailed row-mapping in order to be accurate across all the years. It and many other FERC tables will be integrated soon, using new row-mapping methods.

Lots of new plants and utilities were integrated into the PUDL ID mapping process for the earlier years (1994-2003). All years of FERC Form 1 data should be integrated for any ferc1 tables added in the future.

Command line interfaces of some of the ETL scripts have changed, see their help messages for details.

0.2.0 (2019-09-17)

This is the first release of PUDL to generate data packages as the canonical output, rather than loading data into a local PostgreSQL database. The data packages can then be used to generate a local SQLite database, without relying on any software being installed outside of the Python requirements specified for the catalyst.coop package.

This change will enable easier installation of PUDL, as well as archiving and bulk distribution of the data products in a platform independent format.

0.1.0 (2019-09-12)

This is the only release of PUDL that will be made that makes use of PostgreSQL as the primary data product. It is provided for reference, in case there are users relying on this setup who need access to a well defined release.