Josh Kidder
Senior Consultant

Mr. Josh Kidder is a senior data professional with 17 years of experience working with environmental information and government ordering systems. He has served in many roles, including software and database developer, data engineer, data modeler, and data architect. His background in mathematics has given him invaluable problem-solving skills, honed over years of work in highly demanding environments. Mr. Kidder has expansive knowledge of cloud services, business intelligence, requirements analysis, data modeling, data architecture, data warehousing, and data pipelines, which he applies in his work with tools such as PostgreSQL, Oracle, PL/pgSQL, R, and Python.


Data Management

Real-Time Groundwater Monitoring and Data Management System, Hamilton County, Indiana Led development of an application programming interface (API) and database-driven state government data management system that integrates historical data with real-time sensor data from groundwater wells and delivers it to a web-based dashboard. Designed and implemented data pipelines to ingest historical groundwater data from three public APIs (NOAA, USGS, and NPDES) and a real-time third-party groundwater sensor API into a PostGIS database.
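A pipeline like this typically normalizes each source's records into a common row shape before loading them into the database. A minimal sketch of that step; the sensor-API field names (`station`, `ts`, `level_ft`) are hypothetical, not the actual vendor payload:

```python
from datetime import datetime, timezone

def normalize_reading(raw: dict) -> dict:
    """Map one hypothetical sensor-API record onto a common row shape
    shared with the historical NOAA/USGS/NPDES loads."""
    return {
        "well_id": raw["station"],                          # assumed source field
        "measured_at": datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        "water_level_m": float(raw["level_ft"]) * 0.3048,   # feet -> meters
        "source": "sensor",
    }

row = normalize_reading({"station": "W-101", "ts": 0, "level_ft": 10.0})
```

Converting every source to one shape and unit system up front is what lets a single loader (and a single PostGIS table design) serve both the historical and real-time feeds.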
Plume Viewer Data Layer, Hanford, Washington Designed and developed a normalized PostgreSQL database for efficient loading and storage of qualified environmental and radionuclide data using partitioned tables. Also developed optimized database queries to efficiently retrieve and pass spatiotemporal data to a web application for visualizing concentration plumes and dose series timelines, saving time for the front-end developers by abstracting the data layer.
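Range-partitioning large measurement tables by time is a standard PostgreSQL technique for keeping bulk loads and time-window queries fast. A sketch of the kind of partition DDL involved, generated from Python; the table name is illustrative, not the project's actual schema:

```python
def partition_ddl(table: str, year: int) -> str:
    """Emit DDL for one yearly range partition of a measurement table.
    Assumes the parent table was created with PARTITION BY RANGE on a date."""
    return (
        f"CREATE TABLE {table}_{year} PARTITION OF {table}\n"
        f"    FOR VALUES FROM ('{year}-01-01') TO ('{year + 1}-01-01');"
    )

ddl = partition_ddl("radionuclide_result", 2020)
```

Queries constrained to a date range then touch only the relevant partitions, which is what makes loading and retrieval efficient at this scale.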
Recharge Estimation Tool, Hanford, Washington Redesigned the Recharge Estimation Tool as a spatiotemporal database tool that estimates natural recharge boundary conditions for groundwater models. The new design keeps the same input and output formats while improving the tool's run time by roughly 98 percent, and it supports additional output formats, which decreases the overall time needed for quality control checks of a model run. These time-saving measures freed subject matter experts and scientists to focus on analysis rather than model integrity, delivering higher value for the client and higher yield for the company.
Hindcast Hurricane Data Layer, Florida Designed, developed, and tested a fully automated data pipeline to improve load time and data integrity. The pipeline extracted, transformed, and loaded hurricane modeling results from NetCDF format into a web application backed by a PostGIS geospatial database. The new process reduced turnaround time to a few hours, yielding a higher margin of profitability for the business group and company.
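The core transform in such a pipeline is flattening gridded model output into point rows that a geospatial database can index. A simplified sketch using plain lists in place of a real NetCDF reader; the coordinates and values are made up for illustration:

```python
def grid_to_rows(lats, lons, values):
    """Flatten a 2-D value grid into (lat, lon, value) rows, skipping
    fill values, as an ETL step would do before a bulk load into PostGIS.
    None stands in for the NetCDF fill value here."""
    rows = []
    for i, lat in enumerate(lats):
        for j, lon in enumerate(lons):
            v = values[i][j]
            if v is not None:
                rows.append((lat, lon, v))
    return rows

rows = grid_to_rows([28.0, 28.5], [-82.0, -81.5], [[1.2, None], [0.7, 2.1]])
```

In practice a library such as netCDF4 or xarray would supply the grids, and the rows would be bulk-copied into a table with a geometry column.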
Integrated Computational Framework Data Layer, Hanford, Washington The integrated computational framework is a multistage workflow management tool whose data layer manages the transfer of data between simulation modeling and analysis applications. As the data modeler, designed and developed a normalized PostgreSQL database for storage of environmental, radionuclide, and transport flow modeling data. Also developed more efficient cached views and optimized database procedures to process and calculate dose factors from MODFLOW concentrations.
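The dose calculation itself reduces to scaling modeled concentrations by nuclide-specific conversion factors and summing the contributions; in the project this ran inside optimized database procedures, but the arithmetic can be sketched in Python. The nuclides and factor values below are placeholders, not real dose conversion factors:

```python
def dose_rate(concentrations: dict, dose_factors: dict) -> float:
    """Sum per-nuclide dose contributions: concentration x conversion factor.
    Nuclides missing a factor are skipped rather than guessed."""
    return sum(
        conc * dose_factors[nuclide]
        for nuclide, conc in concentrations.items()
        if nuclide in dose_factors
    )

total = dose_rate({"Tc-99": 2.0, "I-129": 0.5}, {"Tc-99": 0.1, "I-129": 0.4})
```

Caching the per-cell results in materialized views is what avoids recomputing this sum for every dashboard query.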
TRACK Web-Application Database, Albuquerque, New Mexico TRACK is a document management tool that provides an efficient method for tracking documents and performing full-text searches within them. Converted the existing TRACK application, a Microsoft Access® front end tied to a SQL Server backend, into a Python Django web application with a SQL Server backend database. This involved redesigning the data model and developing a database API for the frontend web application. The database design supports dynamically generating the frontend application by storing user interface components and their database sources in application schema tables. Through stored procedures, the database API emits both the frontend application and its data as JSON structures.
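Storing user interface components as rows and serializing them is what lets the frontend be generated from data rather than hand-coded. A minimal sketch of the idea; the component fields (`position`, `type`, and so on) are hypothetical, not the actual TRACK schema:

```python
import json

def build_form(components: list) -> str:
    """Serialize UI-component rows, ordered by position, into a JSON
    structure a frontend could render, mirroring the stored-procedure idea."""
    ordered = sorted(components, key=lambda c: c["position"])
    return json.dumps({"components": ordered})

payload = build_form([
    {"position": 2, "type": "textbox", "field": "title"},
    {"position": 1, "type": "label", "text": "Document"},
])
```

Because the layout lives in tables, changing a screen means updating rows, not redeploying frontend code.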
Refinery Database Management System, North Pole, Alaska Designed and developed extraction, transformation, and loading (ETL) processes to optimize data pre- and post-processing and quality control tasks for a custom geographic information system (GIS) that provides a variety of geospatial tools for the scientific staff. Also developed complex queries to retrieve specific data for scientific analysis and reporting. The implemented database interacts seamlessly with a custom-developed mapping and query tool, GIS, and scientific modeling software.
Geochemical Pit Lake Modeling Application, Winnemucca, Nevada Developed and assisted in the design of a database-driven engine to process and support complex geochemical modeling of pit lake development for mining sites. Functionality includes integrating geochemical humidity cell results, curve-fitting parameters, and oxidation and groundwater flow modeling results to dynamically prepare input files and run the USGS PHREEQC geochemical model. Also developed visualization tools to browse thousands of plots, streamlining data analysis. This project required database design and integration, data architecture and data modeling, and development of ETL packages and scripts. Technologies used included PostgreSQL, Visual Studio .NET, PHREEQC, Python, and PSQL.
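Dynamically preparing model input amounts to templating the solver's text format from database values. A schematic example of rendering a PHREEQC-style SOLUTION input block; the analyte names and concentrations are placeholders, and a real input deck needs far more than this:

```python
def solution_block(number: int, ph: float, analytes: dict) -> str:
    """Render a minimal PHREEQC-style SOLUTION input block from data values.
    Illustrative only; not a complete or validated PHREEQC deck."""
    lines = [f"SOLUTION {number}", "    units   mg/L", f"    pH      {ph}"]
    for name, conc in sorted(analytes.items()):
        lines.append(f"    {name:<8}{conc}")
    return "\n".join(lines)

block = solution_block(1, 7.8, {"Ca": 40.0, "S(6)": 96.0})
```

Generating these decks from database rows is what allows thousands of model variants to be prepared and run without hand-editing input files.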
Environmental Database Development and Design, Valmy, Nevada Aided in the redesign of an environmental data warehouse, incorporating data system enhancements that increased data integrity and reliability. Reviewed and tested the design and development of the database, including table structures, relationships, constraints, indexing, and triggers. The relational database structure comprised five PostgreSQL schemas containing hundreds of tables housing millions of records of environmental data. Also developed functions to process and house data from a variety of sources and formats as it was staged through the ETL process.
Government Database Development and Design, Washington, DC Designed a federal agency's ordering and billing systems data warehouse, incorporating data pipelines that increased data integrity and reliability. The Oracle warehouse structure has three schemas containing more than a hundred tables housing billions of order and billing records. Designed and developed the database, including table structures, relationships, constraints, triggers, and functions, and maintained all revisions to the database structure, functions, and scripts. Developed schemas, table structures, and functions to process and house data from a variety of sources and formats as it was staged through the ETL and data pipeline processes. Created all documentation, including entity relationship diagrams, a data dictionary, and loading and quality control procedures.
Government Database Cloud Migration, Washington, DC Assisted in migrating the same ordering and billing systems data warehouse to AWS GovCloud. Identified and documented differences between Oracle and PostgreSQL objects and procedure languages, along with solutions for each. Migrated database procedures and objects from Oracle to PostgreSQL. Created a PostgreSQL Docker image for local development use by the data management, software development, and quality assurance teams.
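Many documented Oracle-to-PostgreSQL differences are mechanical idiom swaps. A sketch of the kind of rewrite involved, covering two well-known cases; this is illustrative only, since a real migration uses proper SQL parsing and review rather than string replacement:

```python
# Well-known Oracle -> PostgreSQL equivalents; a real migration needs a parser.
IDIOMS = {
    "NVL(": "COALESCE(",            # NVL maps directly to COALESCE
    "SYSDATE": "CURRENT_TIMESTAMP", # closest standard-SQL equivalent
}

def translate(sql: str) -> str:
    """Naively rewrite Oracle idioms to PostgreSQL, for human review."""
    for ora, pg in IDIOMS.items():
        sql = sql.replace(ora, pg)
    return sql

out = translate("SELECT NVL(total, 0), SYSDATE FROM orders")
```

Cataloguing these equivalences up front, as the migration documentation did, is what lets procedure-by-procedure conversion proceed consistently across teams.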

GIS Analysis

Plume Viewer Data Layer, Hanford, Washington (described above under Data Management)
Recharge Estimation Tool, Hanford, Washington (described above under Data Management)
Hindcast Hurricane Data Layer, Florida (described above under Data Management)

Groundwater Monitoring

Real-Time Groundwater Monitoring and Data Management System, Hamilton County, Indiana (described above under Data Management)