
Simulation API Overview

The simulation module provides EnergyPlus execution and result parsing.

Quick Reference

Function/Class Description
simulate() Run a single simulation
simulate_batch() Run multiple simulations in parallel
async_simulate() Non-blocking single simulation
async_simulate_batch() Non-blocking parallel simulations
async_simulate_batch_stream() Streaming progress via async generator
SimulationEvent Progress event from streaming batch
find_energyplus() Discover EnergyPlus installation
expand_objects() Expand HVACTemplate:* objects
run_slab_preprocessor() Run the Slab ground heat-transfer preprocessor
run_basement_preprocessor() Run the Basement ground heat-transfer preprocessor
run_preprocessing() Run all needed preprocessors (combined pipeline)
needs_ground_heat_preprocessing() Check if model needs GHT preprocessing
SimulationResult Simulation result container
SimulationJob Job specification for batch runs
BatchResult Aggregated batch results
SQLResult SQL database query interface
SimulationCache Content-addressed result cache
FileSystem Pluggable storage protocol
S3FileSystem Amazon S3 storage backend

Module Contents

EnergyPlus simulation execution and result handling.

Provides subprocess-based simulation execution, EnergyPlus installation discovery, structured result containers, output parsing, and variable discovery.

Example

```python
from idfkit import load_idf
from idfkit.simulation import simulate, find_energyplus

model = load_idf("building.idf")
result = simulate(model, "weather.epw")
print(result.errors.summary())
```

Parser Coverage

The module provides parsers for the most commonly used EnergyPlus output formats:

  • SQLite (SQLResult): Time-series data, tabular reports, and metadata. This is the recommended output format as it contains all simulation data in a single queryable file.
  • CSV (CSVResult): Time-series data in comma-separated format.
  • HTML (HTMLResult): Tabular reports in HTML format.
  • RDD/MDD (OutputVariableIndex): Available output variables and meters.
  • ERR (ErrorReport): Errors, warnings, and simulation status.
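In practice these parsers are reached through the lazily cached properties on SimulationResult. A minimal sketch (the model and weather file are assumed to exist locally):

```python
from idfkit import load_idf
from idfkit.simulation import simulate

model = load_idf("building.idf")
result = simulate(model, "weather.epw")

print(result.errors.summary())       # ErrorReport from the .err file
if result.sql is not None:           # SQLResult, if a .sql file was produced
    print(result.sql.list_reports())
if result.csv is not None:           # CSVResult, if a .csv file was produced
    print(len(result.csv.columns))
```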

The following formats are intentionally not implemented as the SQLite output covers the same data more reliably and completely:

  • ESO/MTR: Binary-text time-series format (use SQLite instead).
  • EIO: Simulation metadata and invariant outputs (use SQLite instead).

If you have a specific need for these formats, please open an issue describing your use case.

SimulationEvent dataclass

Progress event emitted by async_simulate_batch_stream.

Each event represents a single simulation that has finished (successfully or not). Events are yielded in completion order, not submission order.

Attributes:

Name Type Description
index int

Zero-based position of this job in the original jobs sequence.

label str

Human-readable label from the SimulationJob.

result SimulationResult

The simulation result.

completed int

Number of jobs completed so far (including this one).

total int

Total number of jobs in the batch.

BatchResult dataclass

Aggregated results from a batch simulation run.

Attributes:

Name Type Description
results tuple[SimulationResult, ...]

Simulation results in the same order as the input jobs.

total_runtime_seconds float

Wall-clock time for the entire batch.

succeeded property

Results that completed successfully.

failed property

Results that failed.

all_succeeded property

Whether every job in the batch succeeded.

SimulationJob dataclass

Specification for a single simulation within a batch.

Attributes:

Name Type Description
model object

The EnergyPlus model to simulate.

weather str | Path

Path to the weather file.

label str

Human-readable label for progress reporting.

output_dir str | Path | None

Directory for output files (default: auto temp dir).

expand_objects bool

Run ExpandObjects before simulation.

annual bool

Run annual simulation.

design_day bool

Run design-day-only simulation.

output_prefix str

Prefix for output files.

output_suffix Literal['C', 'L', 'D']

Output file naming suffix ("C", "L", or "D").

readvars bool

Run ReadVarsESO after simulation.

timeout float

Maximum runtime in seconds.

extra_args tuple[str, ...] | None

Additional command-line arguments.
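A minimal batch sketch, assuming SimulationJob accepts the attributes above as keyword arguments and that the listed weather files exist locally:

```python
from idfkit import load_idf
from idfkit.simulation import SimulationJob, simulate_batch

model = load_idf("building.idf")
jobs = [
    SimulationJob(model=model, weather=epw, label=epw, design_day=True)
    for epw in ("chicago.epw", "miami.epw", "seattle.epw")  # hypothetical files
]

batch = simulate_batch(jobs, max_workers=3)
print(f"{len(batch.succeeded)}/{len(batch.results)} succeeded "
      f"in {batch.total_runtime_seconds:.1f}s")
for failure in batch.failed:
    print(failure.run_dir, failure.exit_code)
```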

CacheKey dataclass

Opaque cache key wrapping a hex digest string.

SimulationCache

Content-addressed simulation result cache.

Each entry is a directory named by the cache key containing a full copy of the simulation run directory plus a _cache_meta.json manifest.

cache_dir property

Root directory for cached simulation entries.

compute_key(model, weather, *, expand_objects=True, annual=False, design_day=False, output_suffix='C', readvars=False, extra_args=None)

Compute a deterministic cache key for a simulation invocation.

The model is copied and normalised (Output:SQLite is ensured) so that models differing only in the presence of that object produce the same key.

Parameters:

Name Type Description Default
model IDFDocument

The EnergyPlus model.

required
weather str | Path

Path to the weather file.

required
expand_objects bool

Whether ExpandObjects will run.

True
annual bool

Whether annual simulation is used.

False
design_day bool

Whether design-day-only simulation is used.

False
output_suffix Literal['C', 'L', 'D']

Output file naming suffix ("C", "L", or "D").

'C'
readvars bool

Whether ReadVarsESO post-processing will run.

False
extra_args list[str] | tuple[str, ...] | None

Additional command-line arguments.

None

Returns:

Type Description
CacheKey

A CacheKey for use with get / put.

get(key)

Retrieve a cached simulation result.

Parameters:

Name Type Description Default
key CacheKey

Cache key from compute_key.

required

Returns:

Type Description
SimulationResult | None

A SimulationResult if a cache hit exists, otherwise None.

put(key, result)

Store a successful simulation result in the cache.

Only results with success=True are cached. The entire run directory is copied into the cache atomically.

Parameters:

Name Type Description Default
key CacheKey

Cache key from compute_key.

required
result SimulationResult

Successful simulation result to cache.

required

contains(key)

Check whether a cache entry exists for key.

clear()

Remove all cached entries.
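A usage sketch, assuming the constructor takes the root cache directory (see cache_dir above); the simplest integration is to pass the cache straight to simulate():

```python
from idfkit.simulation import SimulationCache, simulate

cache = SimulationCache("sim-cache")   # assumed constructor argument: root directory

# First call runs EnergyPlus and stores the run directory; a later call with an
# identical (model, weather, options) combination is served from the cache.
result = simulate(model, "weather.epw", cache=cache)

# The lower-level key API can also be used directly.
key = cache.compute_key(model, "weather.epw", design_day=True)
if cache.contains(key):
    hit = cache.get(key)               # SimulationResult or None
```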

EnergyPlusConfig dataclass

Validated EnergyPlus installation configuration.

Attributes:

Name Type Description
executable Path

Path to the energyplus executable.

version tuple[int, int, int]

Parsed version as (major, minor, patch).

install_dir Path

Root installation directory.

idd_path Path

Path to the Energy+.idd file.

weather_dir property

Path to the bundled WeatherData directory, if present.

schema_path property

Path to Energy+.schema.epJSON, if present.

expand_objects_exe property

Path to ExpandObjects executable, if present.

slab_exe property

Path to the Slab ground heat-transfer preprocessor, if present.

slab_idd property

Path to SlabGHT.idd, if present.

basement_exe property

Path to the Basement ground heat-transfer preprocessor, if present.

basement_idd property

Path to BasementGHT.idd, if present.

from_path(path) classmethod

Create config from an explicit installation path.

The path can point to either the installation directory or the energyplus executable directly.

Parameters:

Name Type Description Default
path str | Path

Path to EnergyPlus install directory or executable.

required

Returns:

Type Description
EnergyPlusConfig

Validated EnergyPlusConfig.

Raises:

Type Description
EnergyPlusNotFoundError

If the path is not a valid installation.

AsyncFileSystem

Bases: Protocol

Protocol for async file system operations used by the async simulation module.

This is the async counterpart to FileSystem. Use this with async_simulate and the async batch functions to avoid blocking the event loop during file I/O — especially important for network-backed storage like S3.

All methods accept str | Path for path arguments.

read_bytes(path) async

Read a file as raw bytes.

Parameters:

Name Type Description Default
path str | Path

Path to the file.

required

Returns:

Type Description
bytes

The file contents as bytes.

write_bytes(path, data) async

Write raw bytes to a file.

Parameters:

Name Type Description Default
path str | Path

Path to the file.

required
data bytes

Bytes to write.

required

read_text(path, encoding='utf-8') async

Read a file as text.

Parameters:

Name Type Description Default
path str | Path

Path to the file.

required
encoding str

Text encoding (default "utf-8").

'utf-8'

Returns:

Type Description
str

The file contents as a string.

write_text(path, text, encoding='utf-8') async

Write text to a file.

Parameters:

Name Type Description Default
path str | Path

Path to the file.

required
text str

Text to write.

required
encoding str

Text encoding (default "utf-8").

'utf-8'

exists(path) async

Check whether a file exists.

Parameters:

Name Type Description Default
path str | Path

Path to check.

required

Returns:

Type Description
bool

True if the file exists.

makedirs(path, *, exist_ok=False) async

Create directories recursively.

Parameters:

Name Type Description Default
path str | Path

Directory path to create.

required
exist_ok bool

If True, do not raise if the directory already exists.

False

copy(src, dst) async

Copy a file from src to dst.

Parameters:

Name Type Description Default
src str | Path

Source file path.

required
dst str | Path

Destination file path.

required

glob(path, pattern) async

List files matching a glob pattern under path.

Parameters:

Name Type Description Default
path str | Path

Base directory.

required
pattern str

Glob pattern (e.g. "*.sql").

required

Returns:

Type Description
list[str]

List of matching file paths as strings.

remove(path) async

Remove a file.

Parameters:

Name Type Description Default
path str | Path

Path to the file to remove.

required
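Because this is a Protocol, any object exposing matching async methods can be passed as fs. The in-memory backend below is a hypothetical illustration (not part of idfkit), useful mainly for tests:

```python
import fnmatch
from pathlib import Path


class InMemoryAsyncFileSystem:
    """Hypothetical AsyncFileSystem backed by a dict (illustration only)."""

    def __init__(self) -> None:
        self._files: dict[str, bytes] = {}

    async def read_bytes(self, path: str | Path) -> bytes:
        return self._files[str(path)]

    async def write_bytes(self, path: str | Path, data: bytes) -> None:
        self._files[str(path)] = data

    async def read_text(self, path: str | Path, encoding: str = "utf-8") -> str:
        return self._files[str(path)].decode(encoding)

    async def write_text(self, path: str | Path, text: str, encoding: str = "utf-8") -> None:
        self._files[str(path)] = text.encode(encoding)

    async def exists(self, path: str | Path) -> bool:
        return str(path) in self._files

    async def makedirs(self, path: str | Path, *, exist_ok: bool = False) -> None:
        pass  # directories are implicit in this flat key-value store

    async def copy(self, src: str | Path, dst: str | Path) -> None:
        self._files[str(dst)] = self._files[str(src)]

    async def glob(self, path: str | Path, pattern: str) -> list[str]:
        prefix = str(path).rstrip("/") + "/"
        return [key for key in self._files
                if key.startswith(prefix) and fnmatch.fnmatch(key[len(prefix):], pattern)]

    async def remove(self, path: str | Path) -> None:
        del self._files[str(path)]
```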

AsyncLocalFileSystem

Non-blocking local file system using asyncio.to_thread.

Wraps LocalFileSystem so that each blocking I/O call runs in the default executor, keeping the event loop free.

read_bytes(path) async

Read a file as raw bytes without blocking the event loop.

write_bytes(path, data) async

Write raw bytes to a file without blocking the event loop.

read_text(path, encoding='utf-8') async

Read a file as text without blocking the event loop.

write_text(path, text, encoding='utf-8') async

Write text to a file without blocking the event loop.

exists(path) async

Check whether a file exists without blocking the event loop.

makedirs(path, *, exist_ok=False) async

Create directories recursively without blocking the event loop.

copy(src, dst) async

Copy a file without blocking the event loop.

glob(path, pattern) async

List files matching a glob pattern without blocking the event loop.

remove(path) async

Remove a file without blocking the event loop.

AsyncS3FileSystem

Async file system implementation backed by Amazon S3 via aiobotocore.

Requires the aiobotocore package (install via pip install idfkit[async-s3]).

This is the non-blocking counterpart to S3FileSystem. Use it with async_simulate and the async batch functions to avoid blocking the event loop during S3 I/O.

The client must be initialised via the async context manager protocol:

```python
async with AsyncS3FileSystem(bucket="my-bucket") as fs:
    result = await async_simulate(model, weather, output_dir="run-001", fs=fs)
```

Parameters:

Name Type Description Default
bucket str

S3 bucket name.

required
prefix str

Optional key prefix prepended to all paths. Use this to namespace simulations (e.g., "project-x/batch-42/").

''
**boto_kwargs Any

Additional keyword arguments passed to session.create_client("s3", ...). Common options include:

  • region_name: AWS region (e.g., "us-east-1")
  • endpoint_url: Custom endpoint for S3-compatible services (MinIO, LocalStack, etc.)
  • aws_access_key_id, aws_secret_access_key: Explicit credentials (normally use IAM roles or environment variables)
{}

Examples:

```python
from idfkit.simulation import AsyncS3FileSystem, async_simulate

async with AsyncS3FileSystem(bucket="my-bucket", prefix="sims/") as fs:
    result = await async_simulate(model, weather, output_dir="run-001", fs=fs)
    errors = await result.async_errors()
```

read_bytes(path) async

Read a file as raw bytes from S3.

write_bytes(path, data) async

Write raw bytes to S3.

read_text(path, encoding='utf-8') async

Read a file as text from S3.

write_text(path, text, encoding='utf-8') async

Write text to S3.

exists(path) async

Check whether an object exists in S3.

makedirs(path, *, exist_ok=False) async

No-op — S3 has no directory concept.

copy(src, dst) async

Copy an object within the same bucket.

glob(path, pattern) async

List objects matching a glob pattern under path.

Returns logical paths (without the configured S3 prefix) so that they can be passed back to other AsyncS3FileSystem methods which prepend the prefix automatically via _key().

remove(path) async

Delete an object from S3.

FileSystem

Bases: Protocol

Protocol for file system operations used by the simulation module.

All methods accept str | Path for path arguments.

read_bytes(path)

Read a file as raw bytes.

Parameters:

Name Type Description Default
path str | Path

Path to the file.

required

Returns:

Type Description
bytes

The file contents as bytes.

write_bytes(path, data)

Write raw bytes to a file.

Parameters:

Name Type Description Default
path str | Path

Path to the file.

required
data bytes

Bytes to write.

required

read_text(path, encoding='utf-8')

Read a file as text.

Parameters:

Name Type Description Default
path str | Path

Path to the file.

required
encoding str

Text encoding (default "utf-8").

'utf-8'

Returns:

Type Description
str

The file contents as a string.

write_text(path, text, encoding='utf-8')

Write text to a file.

Parameters:

Name Type Description Default
path str | Path

Path to the file.

required
text str

Text to write.

required
encoding str

Text encoding (default "utf-8").

'utf-8'

exists(path)

Check whether a file exists.

Parameters:

Name Type Description Default
path str | Path

Path to check.

required

Returns:

Type Description
bool

True if the file exists.

makedirs(path, *, exist_ok=False)

Create directories recursively.

Parameters:

Name Type Description Default
path str | Path

Directory path to create.

required
exist_ok bool

If True, do not raise if the directory already exists.

False

copy(src, dst)

Copy a file from src to dst.

Parameters:

Name Type Description Default
src str | Path

Source file path.

required
dst str | Path

Destination file path.

required

glob(path, pattern)

List files matching a glob pattern under path.

Parameters:

Name Type Description Default
path str | Path

Base directory.

required
pattern str

Glob pattern (e.g. "*.sql").

required

Returns:

Type Description
list[str]

List of matching file paths as strings.

remove(path)

Remove a file.

Parameters:

Name Type Description Default
path str | Path

Path to the file to remove.

required

LocalFileSystem

File system implementation backed by pathlib and shutil.

read_bytes(path)

Read a file as raw bytes.

write_bytes(path, data)

Write raw bytes to a file.

read_text(path, encoding='utf-8')

Read a file as text.

write_text(path, text, encoding='utf-8')

Write text to a file.

exists(path)

Check whether a file exists.

makedirs(path, *, exist_ok=False)

Create directories recursively.

copy(src, dst)

Copy a file from src to dst.

glob(path, pattern)

List files matching a glob pattern under path.

remove(path)

Remove a file.

S3FileSystem

File system implementation backed by Amazon S3.

Requires the boto3 package (install via pip install idfkit[s3]).

This backend enables cloud-native simulation workflows where results are stored directly in S3 for later retrieval. EnergyPlus runs locally in a temporary directory, then results are uploaded to S3 after completion.

Parameters:

Name Type Description Default
bucket str

S3 bucket name.

required
prefix str

Optional key prefix prepended to all paths. Use this to namespace simulations (e.g., "project-x/batch-42/").

''
**boto_kwargs Any

Additional keyword arguments passed to boto3.client("s3", ...). Common options include:

  • region_name: AWS region (e.g., "us-east-1")
  • endpoint_url: Custom endpoint for S3-compatible services (MinIO, LocalStack, etc.)
  • aws_access_key_id, aws_secret_access_key: Explicit credentials (normally use IAM roles or environment variables)
{}

Examples:

```python
# Basic usage
fs = S3FileSystem(bucket="my-bucket", prefix="simulations/")

# With MinIO (S3-compatible)
fs = S3FileSystem(
    bucket="local-bucket",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

# Use with simulate()
result = simulate(model, weather, output_dir="run-001", fs=fs)
```

read_bytes(path)

Read a file as raw bytes from S3.

write_bytes(path, data)

Write raw bytes to S3.

read_text(path, encoding='utf-8')

Read a file as text from S3.

write_text(path, text, encoding='utf-8')

Write text to S3.

exists(path)

Check whether an object exists in S3.

makedirs(path, *, exist_ok=False)

No-op — S3 has no directory concept.

copy(src, dst)

Copy an object within the same bucket.

glob(path, pattern)

List objects matching a glob pattern under path.

Returns logical paths (without the configured S3 prefix) so that they can be passed back to other S3FileSystem methods which prepend the prefix automatically via _key().

remove(path)

Delete an object from S3.

OutputVariableIndex dataclass

Index of available output variables and meters for a model.

Constructed from .rdd and .mdd files produced by EnergyPlus during a simulation run. Provides search, filtering, and model injection methods.

Attributes:

Name Type Description
variables tuple[OutputVariable, ...]

Available output variables from the .rdd file.

meters tuple[OutputMeter, ...]

Available output meters from the .mdd file.

from_simulation(result) classmethod

Create an index from a completed simulation result.

Parameters:

Name Type Description Default
result SimulationResult

A SimulationResult with .rdd (and optionally .mdd) files.

required

Returns:

Type Description
OutputVariableIndex

An OutputVariableIndex populated from the simulation output.

Raises:

Type Description
FileNotFoundError

If the .rdd file is not found.

from_files(rdd_path, mdd_path=None) classmethod

Create an index from .rdd and .mdd file paths.

Parameters:

Name Type Description Default
rdd_path str | Path

Path to the .rdd file.

required
mdd_path str | Path | None

Optional path to the .mdd file.

None

Returns:

Type Description
OutputVariableIndex

An OutputVariableIndex populated from the files.

search(pattern)

Search variables and meters by name pattern.

Uses case-insensitive regex matching against variable/meter names.

Parameters:

Name Type Description Default
pattern str

A regex pattern to match against names.

required

Returns:

Type Description
list[OutputVariable | OutputMeter]

List of matching OutputVariable and OutputMeter entries.

filter_by_units(units)

Filter variables and meters by unit type.

Parameters:

Name Type Description Default
units str

The unit string to filter by (case-insensitive).

required

Returns:

Type Description
list[OutputVariable | OutputMeter]

List of matching OutputVariable and OutputMeter entries.

add_all_to_model(model, *, frequency='Timestep', filter_pattern=None)

Add output variables and meters to a model.

Parameters:

Name Type Description Default
model IDFDocument

The IDFDocument to add outputs to.

required
frequency str

The reporting frequency for all added outputs.

'Timestep'
filter_pattern str | None

Optional regex pattern to filter which variables and meters are added (case-insensitive match on name).

None

Returns:

Type Description
int

The number of output objects added.
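A discovery round-trip sketch, assuming these names are importable from idfkit.simulation as listed above; prep_outputs and a short design-day run generate the .rdd/.mdd files, and the filter pattern is only an example:

```python
from idfkit.simulation import OutputVariableIndex, prep_outputs, simulate

prep_outputs(model)                                    # ensures Output:VariableDictionary
probe = simulate(model, "weather.epw", design_day=True)
index = OutputVariableIndex.from_simulation(probe)

for entry in index.search("zone mean air temperature"):
    print(entry.name, entry.units)

added = index.add_all_to_model(model, frequency="Hourly",
                               filter_pattern="temperature|electricity")
print(f"added {added} output objects")
result = simulate(model, "weather.epw", annual=True)   # full run with the new outputs
```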

CSVColumn dataclass

A single data column from a CSV output file.

Attributes:

Name Type Description
header str

The raw column header string.

variable_name str

Parsed variable name.

key_value str

Parsed key value (e.g. "Environment").

units str

Parsed units string.

values tuple[float, ...]

The numeric values for this column.

CSVResult dataclass

Parsed EnergyPlus CSV output file.

Attributes:

Name Type Description
timestamps tuple[str, ...]

The timestamp strings from the Date/Time column.

columns tuple[CSVColumn, ...]

Parsed data columns with extracted metadata.

from_file(path) classmethod

Parse a CSV output file from disk.

Parameters:

Name Type Description Default
path str | Path

Path to the CSV file.

required

Returns:

Type Description
CSVResult

Parsed CSVResult.

from_string(text) classmethod

Parse CSV output from a string.

Parameters:

Name Type Description Default
text str

Raw CSV file contents.

required

Returns:

Type Description
CSVResult

Parsed CSVResult.

get_column(variable_name, key_value=None)

Find a column by variable name and optional key value.

Parameters:

Name Type Description Default
variable_name str

The variable name to search for (case-insensitive).

required
key_value str | None

Optional key value filter (case-insensitive).

None

Returns:

Type Description
CSVColumn | None

The matching CSVColumn, or None if not found.
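A parsing sketch, assuming CSVResult is importable from idfkit.simulation, a run produced eplusout.csv (e.g. with readvars=True), and the zone name exists in the output:

```python
from idfkit.simulation import CSVResult

csv = CSVResult.from_file("eplusout.csv")
col = csv.get_column("Zone Mean Air Temperature", key_value="ZONE 1")
if col is not None:
    print(col.units, len(col.values), csv.timestamps[0])
```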

ErrorMessage dataclass

A single error/warning message from EnergyPlus.

Attributes:

Name Type Description
severity str

One of "Fatal", "Severe", "Warning", "Info".

message str

The primary message text.

details tuple[str, ...]

Additional continuation lines (** ~~~ ** lines).

ErrorReport dataclass

Parsed contents of an EnergyPlus .err file.

Attributes:

Name Type Description
fatal tuple[ErrorMessage, ...]

Fatal error messages.

severe tuple[ErrorMessage, ...]

Severe error messages.

warnings tuple[ErrorMessage, ...]

Warning messages.

info tuple[ErrorMessage, ...]

Informational messages.

warmup_converged bool

Whether warmup convergence was reported.

simulation_complete bool

Whether the simulation completed successfully.

raw_text str

The original unparsed file text.

has_fatal property

Whether any fatal errors were found.

has_severe property

Whether any severe errors were found.

fatal_count property

Number of fatal errors.

severe_count property

Number of severe errors.

error_count property

Total number of fatal + severe errors.

warning_count property

Total number of warnings.

summary()

Return a human-readable summary of the error report.

Returns:

Type Description
str

A multi-line summary string.

from_file(path) classmethod

Parse an .err file from disk.

Parameters:

Name Type Description Default
path str | Path

Path to the .err file.

required

Returns:

Type Description
ErrorReport

Parsed ErrorReport.

from_string(text) classmethod

Parse .err content from a string.

Parameters:

Name Type Description Default
text str

Raw .err file contents.

required

Returns:

Type Description
ErrorReport

Parsed ErrorReport.

OutputMeter dataclass

An available meter from a .mdd file.

Meters aggregate energy or resource consumption and have no key value, unlike OutputVariable. For post-simulation SQL results where variables and meters are stored together, see VariableInfo.

Attributes:

Name Type Description
name str

The meter name (e.g. "Electricity:Facility").

frequency str

The default reporting frequency (e.g. "hourly").

units str

The meter units (e.g. "J").

OutputVariable dataclass

An available output variable from a .rdd file.

Unlike meters, variables are associated with a specific key (zone, surface, etc.). For post-simulation SQL results where variables and meters are stored together, see VariableInfo.

Attributes:

Name Type Description
key str

The key value (e.g. "*" or "ZONE 1").

name str

The variable name (e.g. "Zone Mean Air Temperature").

frequency str

The default reporting frequency (e.g. "hourly").

units str

The variable units (e.g. "C", "W").

EnvironmentInfo dataclass

Metadata about a simulation environment period.

Attributes:

Name Type Description
index int

The environment period index in the database.

name str

The environment name (e.g. "RUN PERIOD 1").

environment_type int

The type integer (1 = DesignDay, 2 = DesignRunPeriod, 3 = WeatherFileRunPeriod).

SQLResult

Query interface for an EnergyPlus SQLite output database.

Opens the database in read-only mode and provides methods for retrieving time-series data, tabular reports, and variable metadata.

Can be used as a context manager:

```python
with SQLResult("eplusout.sql") as sql:
    ts = sql.get_timeseries("Zone Mean Air Temperature", "ZONE 1")
```

close()

Close the database connection.

get_timeseries(variable_name, key_value='*', frequency=None, environment=None)

Retrieve a time series for a variable.

Parameters:

Name Type Description Default
variable_name str

The output variable name.

required
key_value str

The key value (e.g. zone name). Use "*" for environment-level variables. Case-insensitive matching.

'*'
frequency str | None

Optional frequency filter (e.g. "Hourly").

None
environment Environment | None

Filter by environment type. None (default) returns all data, "annual" returns only weather-file run period data, and "sizing" returns only design-day data.

None

Returns:

Type Description
TimeSeriesResult

A TimeSeriesResult with timestamps and values.

Raises:

Type Description
KeyError

If the variable is not found in the database.

ValueError

If environment is not a recognized value.

get_tabular_data(report_name=None, table_name=None, row_name=None, column_name=None, report_for=None)

Retrieve tabular report data.

Parameters:

Name Type Description Default
report_name str | None

Optional filter by report name.

None
table_name str | None

Optional filter by table name.

None
row_name str | None

Optional filter by row label.

None
column_name str | None

Optional filter by column label.

None
report_for str | None

Optional filter by report scope (e.g. "Entire Facility").

None

Returns:

Type Description
list[TabularRow]

List of TabularRow entries matching the filters.

get_tabular_value(report_name, table_name, row_name, column_name, report_for='Entire Facility')

Retrieve a single tabular cell value.

Convenience wrapper around get_tabular_data that returns exactly one cell value.

Parameters:

Name Type Description Default
report_name str

Report name (e.g. "AnnualBuildingUtilityPerformanceSummary").

required
table_name str

Table name (e.g. "End Uses").

required
row_name str

Row label (e.g. "Heating").

required
column_name str

Column label (e.g. "Electricity").

required
report_for str

Report scope (default "Entire Facility").

'Entire Facility'

Returns:

Type Description
str

The cell value as a string.

Raises:

Type Description
KeyError

If no matching row is found or if multiple rows match.
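For instance, pulling the heating electricity cell from the End Uses table. The report, table, row, and column names below are the ones used as examples in the parameter descriptions; whether the cell exists depends on the model and the reports requested:

```python
with SQLResult("eplusout.sql") as sql:
    value = sql.get_tabular_value(
        "AnnualBuildingUtilityPerformanceSummary",
        "End Uses",
        "Heating",
        "Electricity",
    )
    print(float(value))   # cell values are returned as strings
```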

list_variables()

List all available variables in the database.

Returns:

Type Description
list[VariableInfo]

List of VariableInfo entries describing each variable.

list_environments()

List all environment periods in the database.

Returns:

Type Description
list[EnvironmentInfo]

List of EnvironmentInfo entries describing each period (e.g. design days and run periods).

list_reports()

List all available tabular report names.

Returns:

Type Description
list[str]

Sorted list of unique report names.

to_dataframe(variable_name, key_value='*', frequency=None, environment=None)

Retrieve a time series as a pandas DataFrame.

This is a convenience wrapper around get_timeseries that directly returns a DataFrame.

Parameters:

Name Type Description Default
variable_name str

The output variable name.

required
key_value str

The key value. Use "*" for environment-level variables.

'*'
frequency str | None

Optional frequency filter.

None
environment Environment | None

Filter by environment type (None by default for all data, "annual" for run periods, "sizing" for design days).

None

Returns:

Type Description
Any

A pandas DataFrame with a timestamp index.

Raises:

Type Description
ImportError

If pandas is not installed.

KeyError

If the variable is not found.

query(sql, parameters=())

Execute a raw SQL query.

Parameters:

Name Type Description Default
sql str

The SQL query string.

required
parameters tuple[object, ...]

Query parameters for parameterized queries.

()

Returns:

Type Description
list[tuple[object, ...]]

List of result tuples.

TabularRow dataclass

A single row from an EnergyPlus tabular report.

Attributes:

Name Type Description
report_name str

The report name (e.g. "AnnualBuildingUtilityPerformanceSummary").

report_for str

The report scope (e.g. "Entire Facility").

table_name str

The table name within the report.

row_name str

The row label.

column_name str

The column label.

units str

The value units.

value str

The cell value as a string.

TimeSeriesResult dataclass

A single time series extracted from an EnergyPlus SQL database.

Attributes:

Name Type Description
variable_name str

The output variable name.

key_value str

The key value (e.g. zone or surface name).

units str

The variable units.

frequency str

The reporting frequency.

timestamps tuple[datetime, ...]

Timestamp for each data point.

values tuple[float, ...]

Numeric values for each data point.

to_dataframe()

Convert to a pandas DataFrame.

Requires pandas to be installed.

Returns:

Type Description
Any

A DataFrame with a timestamp index and a column for the values.

Raises:

Type Description
ImportError

If pandas is not installed.

plot(*, backend=None, title=None)

Plot this time series as a line chart.

Auto-detects the plotting backend if not provided. Requires matplotlib or plotly to be installed.

Parameters:

Name Type Description Default
backend Any

A PlotBackend instance. If not provided, auto-detects.

None
title str | None

Optional plot title. Defaults to "key_value: variable_name".

None

Returns:

Type Description
Any

A figure object from the backend.

Raises:

Type Description
ImportError

If no plotting backend is available.

VariableInfo dataclass

Metadata about an available variable or meter in the SQL database.

This class represents both regular variables and meters because EnergyPlus stores them in a single ReportDataDictionary table, distinguished only by an IsMeter column. Use the is_meter flag to tell them apart.

For pre-simulation discovery from .rdd / .mdd files, see the separate OutputVariable and OutputMeter classes instead.

Attributes:

Name Type Description
name str

The variable name.

key_value str

The key value. Empty for meters.

frequency str

The reporting frequency.

units str

The variable units.

is_meter bool

Whether this is a meter (vs. a regular variable).

variable_type str

The variable type string (e.g. "Zone", "HVAC").

PlotBackend

Bases: Protocol

Protocol for plotting backends used by the simulation module.

Implementations must provide methods for common chart types. Each method returns a figure object native to the backend (e.g. matplotlib Figure or plotly Figure).

line(x, y, *, title=None, xlabel=None, ylabel=None, label=None)

Create a single line plot.

Parameters:

Name Type Description Default
x Sequence[Any]

X-axis values (e.g. timestamps).

required
y Sequence[float]

Y-axis values.

required
title str | None

Optional plot title.

None
xlabel str | None

Optional X-axis label.

None
ylabel str | None

Optional Y-axis label.

None
label str | None

Optional line label for legend.

None

Returns:

Type Description
Any

A figure object native to the backend.

multi_line(x, y_series, *, title=None, xlabel=None, ylabel=None)

Create a multi-line plot with legend.

Parameters:

Name Type Description Default
x Sequence[Any]

Shared X-axis values.

required
y_series dict[str, Sequence[float]]

Mapping of label to Y values for each line.

required
title str | None

Optional plot title.

None
xlabel str | None

Optional X-axis label.

None
ylabel str | None

Optional Y-axis label.

None

Returns:

Type Description
Any

A figure object native to the backend.

heatmap(data, *, x_labels=None, y_labels=None, title=None, colorbar_label=None)

Create a 2D heatmap.

Parameters:

Name Type Description Default
data Sequence[Sequence[float]]

2D array of values (rows, columns).

required
x_labels Sequence[str] | None

Optional labels for columns.

None
y_labels Sequence[str] | None

Optional labels for rows.

None
title str | None

Optional plot title.

None
colorbar_label str | None

Optional label for the colorbar.

None

Returns:

Type Description
Any

A figure object native to the backend.

bar(categories, values, *, title=None, xlabel=None, ylabel=None)

Create a bar chart.

Parameters:

Name Type Description Default
categories Sequence[str]

Category labels for each bar.

required
values Sequence[float]

Values for each bar.

required
title str | None

Optional plot title.

None
xlabel str | None

Optional X-axis label.

None
ylabel str | None

Optional Y-axis label.

None

Returns:

Type Description
Any

A figure object native to the backend.

stacked_bar(categories, series, *, title=None, xlabel=None, ylabel=None)

Create a stacked bar chart.

Parameters:

Name Type Description Default
categories Sequence[str]

Category labels for each bar group.

required
series dict[str, Sequence[float]]

Mapping of series label to values.

required
title str | None

Optional plot title.

None
xlabel str | None

Optional X-axis label.

None
ylabel str | None

Optional Y-axis label.

None

Returns:

Type Description
Any

A figure object native to the backend.

ProgressParser

Parse EnergyPlus stdout lines into SimulationProgress events.

Maintains internal state to track the current environment, warmup iteration count, and simulation day for percentage estimation.

A new instance should be created for each simulation run. The parser is designed to be defensive — unrecognised lines return None and never raise.

Examples:

```python
parser = ProgressParser()
for line in energyplus_stdout_lines:
    event = parser.parse_line(line)
    if event is not None:
        print(event.phase, event.percent)
```

set_job_context(index, label)

Set batch job context that will be included in all emitted events.

Parameters:

Name Type Description Default
index int

Job index within the batch.

required
label str

Human-readable job label.

required

parse_line(line)

Parse a single stdout line into a progress event.

Parameters:

Name Type Description Default
line str

A single line from EnergyPlus stdout.

required

Returns:

Type Description
SimulationProgress | None

A SimulationProgress event, or None if the line does not contain progress information.

SimulationProgress dataclass

Progress event emitted during a single EnergyPlus simulation.

This dataclass represents a progress update parsed from EnergyPlus stdout output. It is passed to user-supplied on_progress callbacks on simulate and async_simulate.

Attributes:

Name Type Description
phase Literal['preprocessing', 'initializing', 'warmup', 'simulating', 'postprocessing', 'complete']

Current simulation phase.

message str

Raw EnergyPlus stdout line (stripped).

percent float | None

Estimated completion percentage (0.0-100.0), or None when progress is indeterminate (e.g. during warmup).

environment str | None

Name of the current simulation environment, if known.

warmup_day int | None

Current warmup iteration (1-based), only set during the "warmup" phase.

sim_day int | None

Current simulation day-of-year (1-based), only set during the "simulating" phase.

sim_total_days int | None

Total number of simulation days, only set when the simulation period is known.

job_index int | None

Index of this job in a batch, or None for single simulations.

job_label str | None

Label of this job in a batch, or None for single simulations.
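A sketch of a per-event callback, assuming SimulationProgress is importable from idfkit.simulation and the model is loaded elsewhere:

```python
from idfkit.simulation import SimulationProgress, simulate

def report(event: SimulationProgress) -> None:
    # percent is None while progress is indeterminate (e.g. during warmup)
    pct = f"{event.percent:5.1f}%" if event.percent is not None else "   ...%"
    print(f"[{event.phase:>14}] {pct}  {event.message}")

result = simulate(model, "weather.epw", annual=True, on_progress=report)
```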

SimulationResult dataclass

Result of an EnergyPlus simulation run.

Attributes:

Name Type Description
run_dir Path

Directory containing all simulation output.

success bool

Whether the simulation exited successfully.

exit_code int | None

Process exit code (None if timed out).

stdout str

Captured standard output.

stderr str

Captured standard error.

runtime_seconds float

Wall-clock execution time in seconds.

output_prefix str

Output file prefix (default "eplus").

fs FileSystem | None

Optional sync file system backend for reading output files.

async_fs AsyncFileSystem | None

Optional async file system backend for non-blocking reads. Set automatically by async_simulate when an AsyncFileSystem is provided.

errors property

Parsed error report from the .err file (lazily cached).

Returns:

Type Description
ErrorReport

Parsed ErrorReport from the simulation's .err output.

sql property

Parsed SQL output database (lazily cached).

Returns:

Type Description
SQLResult | None

An SQLResult for querying time-series and tabular data, or None if no .sql file was produced.

variables property

Output variable/meter index from .rdd/.mdd files (lazily cached).

Returns:

Type Description
OutputVariableIndex | None

An OutputVariableIndex for searching and injecting output variables, or None if no .rdd file was produced.

csv property

Parsed CSV output (lazily cached).

Returns:

Type Description
CSVResult | None

A CSVResult with extracted column metadata and values, or None if no .csv file was produced.

html property

Parsed HTML tabular output (lazily cached).

Returns:

Type Description
HTMLResult | None

An HTMLResult with extracted tables and titles, or None if no HTML file was produced.

sql_path property

Path to the SQLite output file, if present.

err_path property

Path to the .err output file, if present.

eso_path property

Path to the .eso output file, if present.

csv_path property

Path to the .csv output file, if present.

html_path property

Path to the HTML tabular output file, if present.

rdd_path property

Path to the .rdd output file, if present.

mdd_path property

Path to the .mdd output file, if present.

async_errors() async

Parsed error report from the .err file (async, lazily cached).

Non-blocking counterpart to errors that uses async_fs for file reads.

Returns:

Type Description
ErrorReport

Parsed ErrorReport from the simulation's .err output.

async_sql() async

Parsed SQL output database (async, lazily cached).

Non-blocking counterpart to sql that uses async_fs for file reads.

Returns:

Type Description
SQLResult | None

An SQLResult for querying time-series and tabular data, or None if no .sql file was produced.

async_variables() async

Output variable/meter index (async, lazily cached).

Non-blocking counterpart to variables that uses async_fs for file reads.

Returns:

Type Description
OutputVariableIndex | None

An OutputVariableIndex for searching and injecting output variables, or None if no .rdd file was produced.

async_csv() async

Parsed CSV output (async, lazily cached).

Non-blocking counterpart to csv that uses async_fs for file reads.

Returns:

Type Description
CSVResult | None

A CSVResult with extracted column metadata and values, or None if no .csv file was produced.

async_html() async

Parsed HTML tabular output (async, lazily cached).

Non-blocking counterpart to html that uses async_fs for file reads.

Returns:

Type Description
HTMLResult | None

An HTMLResult with extracted tables and titles, or None if no HTML file was produced.

from_directory(path, *, output_prefix='eplus', fs=None, async_fs=None) classmethod

Reconstruct a SimulationResult from an existing output directory.

Useful for inspecting results from a previous simulation run.

Parameters:

Name Type Description Default
path str | Path

Path to the simulation output directory.

required
output_prefix str

Output file prefix used during the run.

'eplus'
fs FileSystem | None

Optional sync file system backend for reading output files.

None
async_fs AsyncFileSystem | None

Optional async file system backend for non-blocking reads.

None

Returns:

Type Description
SimulationResult

SimulationResult pointing to the existing output.
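A re-opening sketch; the directory path is hypothetical, and output_prefix must match what the original run used:

```python
from idfkit.simulation import SimulationResult

result = SimulationResult.from_directory("runs/run-001", output_prefix="eplus")
print(result.errors.summary())
if result.sql is not None:
    print(result.sql.list_reports())
```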

prep_outputs(model)

Add standard output objects to the model if not already present.

Ensures the model includes:

  • Output:SQLite (SimpleAndTabular) — for SQL-based result queries
  • Output:Table:SummaryReports (AllSummary) — for tabular reports
  • Output:VariableDictionary (Regular) — for .rdd / .mdd generation

This is a superset of ensure_sql_output.

Parameters:

Name Type Description Default
model IDFDocument

The model to modify in place.

required

async_simulate_batch(jobs, *, energyplus=None, max_concurrent=None, cache=None, fs=None, on_progress=None) async

Run multiple EnergyPlus simulations concurrently using asyncio.

This is the async counterpart to simulate_batch. Concurrency is controlled with an asyncio.Semaphore instead of a thread pool.

Individual job failures are captured as failed SimulationResult entries -- the batch never raises due to a single job failing.

Parameters:

Name Type Description Default
jobs Sequence[SimulationJob]

Sequence of simulation jobs to execute.

required
energyplus EnergyPlusConfig | None

Shared EnergyPlus configuration (auto-discovered if None).

None
max_concurrent int | None

Maximum number of concurrent simulations. Defaults to min(len(jobs), os.cpu_count() or 1).

None
cache SimulationCache | None

Optional simulation cache for content-hash lookups.

None
fs FileSystem | AsyncFileSystem | None

Optional file system backend passed through to each async_simulate call.

None
on_progress Callable[[SimulationProgress], Any] | None

Optional callback invoked with SimulationProgress events during each individual simulation. Events include job_index and job_label to identify which batch job they belong to. Both sync and async callables are accepted. The "tqdm" shorthand is not supported for batch runners; use tqdm_progress with a custom per-job callback instead.

None

Returns:

Type Description
BatchResult

A BatchResult with results in the same order as jobs.

Raises:

Type Description
ValueError

If jobs is empty.
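A minimal concurrency sketch; the weather file names are hypothetical and the model is assumed to be loaded elsewhere:

```python
import asyncio
from idfkit.simulation import SimulationJob, async_simulate_batch

async def main() -> None:
    jobs = [
        SimulationJob(model=model, weather=epw, label=epw)
        for epw in ("chicago.epw", "miami.epw")
    ]
    batch = await async_simulate_batch(jobs, max_concurrent=2)
    print(batch.all_succeeded, f"{batch.total_runtime_seconds:.1f}s")

asyncio.run(main())
```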

async_simulate_batch_stream(jobs, *, energyplus=None, max_concurrent=None, cache=None, fs=None, on_progress=None) async

Run simulations concurrently, yielding events as each one completes.

This is an async generator variant of async_simulate_batch that yields SimulationEvent objects in completion order. This enables real-time progress reporting without needing a callback:

```python
async for event in async_simulate_batch_stream(jobs, max_concurrent=4):
    print(f"[{event.completed}/{event.total}] {event.label}")
```

Parameters:

Name Type Description Default
jobs Sequence[SimulationJob]

Sequence of simulation jobs to execute.

required
energyplus EnergyPlusConfig | None

Shared EnergyPlus configuration (auto-discovered if None).

None
max_concurrent int | None

Maximum number of concurrent simulations. Defaults to min(len(jobs), os.cpu_count() or 1).

None
cache SimulationCache | None

Optional simulation cache for content-hash lookups.

None
fs FileSystem | AsyncFileSystem | None

Optional file system backend.

None
on_progress Callable[[SimulationProgress], Any] | None

Optional callback invoked with SimulationProgress events during each individual simulation. Events include job_index and job_label. The "tqdm" shorthand is not supported for batch runners; use tqdm_progress with a custom per-job callback instead.

None

Yields:

Type Description
AsyncIterator[SimulationEvent]

SimulationEvent for each completed simulation, in the order they finish.

Raises:

Type Description
ValueError

If jobs is empty.

async_simulate(model, weather, *, output_dir=None, energyplus=None, expand_objects=True, annual=False, design_day=False, output_prefix='eplus', output_suffix='C', readvars=False, timeout=3600.0, extra_args=None, cache=None, fs=None, on_progress=None) async

Run an EnergyPlus simulation without blocking the event loop.

This is the async counterpart to simulate. All parameters and return values are identical; the only difference is that EnergyPlus runs as an asyncio subprocess, allowing the caller to await the result while other coroutines continue executing.

Cancellation is supported: if the wrapping asyncio.Task is cancelled, the EnergyPlus subprocess is killed and cleaned up.

Parameters:

Name Type Description Default
model IDFDocument

The EnergyPlus model to simulate.

required
weather str | Path

Path to the weather file (.epw).

required
output_dir str | Path | None

Directory for output files (default: auto temp dir).

None
energyplus EnergyPlusConfig | None

Pre-configured EnergyPlus installation. If None, uses find_energyplus for auto-discovery.

None
expand_objects bool

Run ExpandObjects before simulation. When True, also runs the Slab and Basement ground heat-transfer preprocessors if the model contains the corresponding objects.

True
annual bool

Run annual simulation (-a flag).

False
design_day bool

Run design-day-only simulation (-D flag).

False
output_prefix str

Prefix for output files (default "eplus").

'eplus'
output_suffix Literal['C', 'L', 'D']

Output file naming suffix: "C" for combined table files (default), "L" for legacy separate table files, or "D" for timestamped separate files.

'C'
readvars bool

Run ReadVarsESO after simulation (-r flag).

False
timeout float

Maximum runtime in seconds (default 3600).

3600.0
extra_args list[str] | None

Additional command-line arguments.

None
cache SimulationCache | None

Optional simulation cache for content-hash lookups.

None
fs FileSystem | AsyncFileSystem | None

Optional file system backend for storing results on remote storage (e.g., S3). Both sync FileSystem and async AsyncFileSystem are accepted. When an AsyncFileSystem is provided, uploads and result reads are truly non-blocking. A sync FileSystem is automatically wrapped in asyncio.to_thread to avoid blocking the event loop.

None
on_progress Callable[[SimulationProgress], Any] | Literal['tqdm'] | None

Optional callback invoked with a SimulationProgress event each time EnergyPlus emits a progress line. Both synchronous and async callables are accepted -- async callables are awaited. Pass "tqdm" to use a built-in tqdm progress bar (requires pip install idfkit[progress]).

None

Returns:

Type Description
SimulationResult

SimulationResult with paths to output files.

Raises:

Type Description
SimulationError

On timeout, OS error, or missing weather file.

ExpandObjectsError

If a preprocessing step fails.

EnergyPlusNotFoundError

If EnergyPlus cannot be found.
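A minimal sketch (model is assumed to be an IDFDocument loaded elsewhere):

```python
import asyncio
from idfkit.simulation import async_simulate

async def main() -> None:
    # EnergyPlus runs as an asyncio subprocess; other coroutines keep running.
    result = await async_simulate(model, "weather.epw", design_day=True, timeout=600.0)
    print(result.success, f"{result.runtime_seconds:.1f}s")

asyncio.run(main())
```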

simulate_batch(jobs, *, energyplus=None, max_workers=None, cache=None, progress=None, fs=None, on_progress=None)

Run multiple EnergyPlus simulations in parallel.

Uses ThreadPoolExecutor to dispatch simulations concurrently. Individual job failures are captured as failed SimulationResult entries -- the batch never raises due to a single job failing.

Parameters:

Name Type Description Default
jobs Sequence[SimulationJob]

Sequence of simulation jobs to execute.

required
energyplus EnergyPlusConfig | None

Shared EnergyPlus configuration (auto-discovered if None).

None
max_workers int | None

Maximum number of concurrent simulations. Defaults to min(len(jobs), os.cpu_count() or 1).

None
cache SimulationCache | None

Optional simulation cache for content-hash lookups.

None
progress Callable[..., None] | None

Optional callback invoked after each job completes. Called as progress(completed=N, total=M, label=label, success=bool).

None
fs FileSystem | None

Optional file system backend passed through to each simulate call.

None
on_progress Callable[[SimulationProgress], None] | None

Optional callback invoked with SimulationProgress events during each individual simulation. Events include job_index and job_label to identify which batch job they belong to. The "tqdm" shorthand is not supported for batch runners; use tqdm_progress with a custom per-job callback instead.

None

Returns:

Type Description
BatchResult

A BatchResult with results in the same order as jobs.

Raises:

Type Description
ValueError

If jobs is empty.

find_energyplus(*, version=None, path=None)

Find an EnergyPlus installation.

Discovery order
  1. Explicit path argument.
  2. ENERGYPLUS_DIR environment variable.
  3. energyplus on PATH (via shutil.which).
  4. Platform-specific default directories (newest version first).

Parameters:

Name Type Description Default
version tuple[int, int, int] | str | None

Optional version filter. Accepts (major, minor, patch) tuple or a string like "24.1.0" or "24.1".

None
path str | Path | None

Explicit path to EnergyPlus install directory or executable.

None

Returns:

Type Description
EnergyPlusConfig

Validated EnergyPlusConfig.

Raises:

Type Description
EnergyPlusNotFoundError

If no matching installation is found.
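A discovery sketch; the version string is only an example, and reusing the returned config across runs skips repeated discovery:

```python
from idfkit.simulation import find_energyplus, simulate

eplus = find_energyplus(version="24.1")
print(eplus.version, eplus.executable, eplus.idd_path)

result = simulate(model, "weather.epw", energyplus=eplus)
```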

expand_objects(model, *, energyplus=None, timeout=120.0)

Run the EnergyPlus ExpandObjects preprocessor and return the expanded document.

ExpandObjects replaces HVACTemplate:* objects with their fully specified low-level HVAC equivalents. The original model is not mutated.

If the document contains no HVACTemplate:* objects a copy is returned immediately without invoking the preprocessor (no EnergyPlus installation required).

Parameters:

Name Type Description Default
model IDFDocument

The EnergyPlus model to expand.

required
energyplus EnergyPlusConfig | None

Pre-configured EnergyPlus installation. If None, find_energyplus is used for auto-discovery.

None
timeout float

Maximum time in seconds to wait for the preprocessor (default 120).

120.0

Returns:

Type Description
IDFDocument

A new IDFDocument containing the expanded objects.

Raises:

Type Description
EnergyPlusNotFoundError

If no EnergyPlus installation (and therefore no ExpandObjects executable) can be found.

ExpandObjectsError

If the ExpandObjects executable is missing from the installation or the preprocessor exits with an error.
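A sketch of expanding ahead of time so the generated HVAC objects can be inspected; passing expand_objects=False to the subsequent simulate() call (to skip a second expansion) is an assumption, not a documented requirement:

```python
from idfkit.simulation import expand_objects, simulate

expanded = expand_objects(model)      # the original model is not mutated
# ...inspect or tweak the fully specified HVAC objects here...
result = simulate(expanded, "weather.epw", expand_objects=False)
```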

needs_ground_heat_preprocessing(model)

Return True if model contains ground heat-transfer objects.

Checks for GroundHeatTransfer:Slab:* and GroundHeatTransfer:Basement:* objects that require the Slab or Basement preprocessor before simulation.

This is used by simulate to decide whether to auto-run the preprocessing pipeline.

run_basement_preprocessor(model, *, energyplus=None, weather=None, timeout=120.0)

Run the Basement ground heat-transfer preprocessor and return the expanded document.

The workflow is:

  1. ExpandObjects extracts GroundHeatTransfer:Basement:* objects from the model into BasementGHTIn.idf.
  2. The Basement preprocessor reads BasementGHTIn.idf and computes ground temperatures, writing EPObjects.TXT.
  3. The resulting boundary conditions are appended to the expanded IDF.

The original model is not mutated.

If the document contains no GroundHeatTransfer:Basement:* objects a copy is returned immediately.

Parameters:

Name Type Description Default
model IDFDocument

The EnergyPlus model containing GroundHeatTransfer:Basement:* objects.

required
energyplus EnergyPlusConfig | None

Pre-configured EnergyPlus installation. If None, auto-discovery is used.

None
weather str | Path | None

Path to a weather file (.epw). The Basement preprocessor requires weather data to compute ground temperatures.

None
timeout float

Maximum time in seconds for each subprocess invocation (default 120).

120.0

Returns:

Type Description
IDFDocument

A new IDFDocument with basement ground temperatures appended.

Raises:

Type Description
EnergyPlusNotFoundError

If no EnergyPlus installation is found.

ExpandObjectsError

If any preprocessor step fails.

run_preprocessing(model, *, energyplus=None, weather=None, timeout=120.0)

Run ExpandObjects and any required ground heat-transfer preprocessors.

This is a convenience function that runs all needed preprocessors in a single call. It runs ExpandObjects once, then checks which preprocessor input files were produced (GHTIn.idf and/or BasementGHTIn.idf) and runs the corresponding Fortran solvers.

simulate calls this automatically when the model contains ground heat-transfer objects and expand_objects is True. Call it directly only when you need to inspect or modify the preprocessed model before simulation.

Parameters:

Name Type Description Default
model IDFDocument

The EnergyPlus model to preprocess.

required
energyplus EnergyPlusConfig | None

Pre-configured EnergyPlus installation. If None, auto-discovery is used.

None
weather str | Path | None

Path to a weather file (.epw). Required by the Slab and Basement solvers.

None
timeout float

Maximum time in seconds for each subprocess invocation (default 120).

120.0

Returns:

Type Description
IDFDocument

A new IDFDocument with all preprocessing applied.

Raises:

Type Description
EnergyPlusNotFoundError

If no EnergyPlus installation is found.

ExpandObjectsError

If any preprocessor step fails.
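A sketch of running the full pipeline explicitly before simulating; as with expand_objects above, skipping re-expansion via expand_objects=False is an assumption:

```python
from idfkit.simulation import run_preprocessing, simulate

# Weather data is needed here because the Slab/Basement solvers use it.
prepped = run_preprocessing(model, weather="weather.epw")
# ...inspect the appended ground-temperature objects here...
result = simulate(prepped, "weather.epw", expand_objects=False)
```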

run_slab_preprocessor(model, *, energyplus=None, weather=None, timeout=120.0)

Run the Slab ground heat-transfer preprocessor and return the expanded document.

The workflow is:

  1. ExpandObjects extracts GroundHeatTransfer:Slab:* objects from the model into GHTIn.idf.
  2. The Slab preprocessor reads GHTIn.idf and computes monthly ground surface temperatures, writing SLABSurfaceTemps.TXT.
  3. The resulting temperature schedules are appended to the expanded IDF.

The original model is not mutated.

If the document contains no GroundHeatTransfer:Slab:* objects a copy is returned immediately.

Parameters:

Name Type Description Default
model IDFDocument

The EnergyPlus model containing GroundHeatTransfer:Slab:* objects.

required
energyplus EnergyPlusConfig | None

Pre-configured EnergyPlus installation. If None, auto-discovery is used.

None
weather str | Path | None

Path to a weather file (.epw). Some Slab configurations may require weather data.

None
timeout float

Maximum time in seconds for each subprocess invocation (default 120).

120.0

Returns:

Type Description
IDFDocument

A new IDFDocument with slab ground temperatures appended.

Raises:

Type Description
EnergyPlusNotFoundError

If no EnergyPlus installation is found.

ExpandObjectsError

If any preprocessor step fails.

get_default_backend()

Auto-detect and return an available plotting backend.

Tries matplotlib first, then plotly. Raises ImportError if neither is available.

Returns:

Type Description
PlotBackend

A PlotBackend instance.

Raises:

Type Description
ImportError

If neither matplotlib nor plotly is installed.

plot_comfort_hours(sql, zones, *, comfort_min=20.0, comfort_max=26.0, backend=None, title='Comfort Hours by Zone and Month')

Create a heatmap of comfort hours by zone and month.

For each zone, calculates the percentage of hours within the comfort range for each month and displays as a heatmap.

Parameters:

Name Type Description Default
sql SQLResult

An open SQLResult database.

required
zones Sequence[str]

Zone names to analyze.

required
comfort_min float

Minimum comfort temperature (default 20 °C).

20.0
comfort_max float

Maximum comfort temperature (default 26C).

26.0
backend PlotBackend | None

Plotting backend to use. Auto-detects if not provided.

None
title str

Plot title.

'Comfort Hours by Zone and Month'

Returns:

Type Description
Any

A figure object from the backend.

plot_energy_balance(sql, *, backend=None, title='End-Use Energy by Category')

Create a bar chart of end-use energy consumption.

Extracts data from the AnnualBuildingUtilityPerformanceSummary report and plots energy consumption by end-use category.

Parameters:

Name Type Description Default
sql SQLResult

An open SQLResult database.

required
backend PlotBackend | None

Plotting backend to use. Auto-detects if not provided.

None
title str

Plot title.

'End-Use Energy by Category'

Returns:

Type Description
Any

A figure object from the backend.

plot_temperature_profile(sql, zones, *, backend=None, title='Zone Air Temperatures', frequency=None)

Create a multi-line plot of zone air temperatures.

Queries Zone Mean Air Temperature for each specified zone and plots them on a shared time axis.

Parameters:

Name Type Description Default
sql SQLResult

An open SQLResult database.

required
zones Sequence[str]

Zone names to plot.

required
backend PlotBackend | None

Plotting backend to use. Auto-detects if not provided.

None
title str

Plot title.

'Zone Air Temperatures'
frequency str | None

Optional frequency filter (e.g. "Hourly").

None

Returns:

Type Description
Any

A figure object from the backend.
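A plotting sketch; the zone names are hypothetical, matplotlib or plotly must be installed, and result.sql must come from a completed run:

```python
from idfkit.simulation import plot_energy_balance, plot_temperature_profile

fig_temps = plot_temperature_profile(result.sql, ["ZONE 1", "ZONE 2"], frequency="Hourly")
fig_enduse = plot_energy_balance(result.sql)
```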

tqdm_progress(*, desc='Simulating', bar_format='{l_bar}{bar}| {n:.0f}/{total_fmt}% [{elapsed}<{remaining}]', leave=True, position=None, file=None, **tqdm_kwargs)

Context manager that yields a tqdm-based on_progress callback.

The progress bar is automatically closed when the context exits, even if an exception is raised.

Parameters:

Name Type Description Default
desc str

Progress bar description (left label).

'Simulating'
bar_format str

tqdm bar format string. The default shows percentage, elapsed and estimated remaining time.

'{l_bar}{bar}| {n:.0f}/{total_fmt}% [{elapsed}<{remaining}]'
leave bool

Whether the bar remains visible after completion.

True
position int | None

Line position for the bar (useful for nested bars).

None
file Any

Output stream (default: sys.stderr).

None
**tqdm_kwargs Any

Extra keyword arguments forwarded to tqdm.tqdm.

{}

Yields:

Type Description
Callable[[SimulationProgress], None]

A callback suitable for the on_progress parameter.

Raises:

Type Description
ImportError

If tqdm is not installed.

Examples:

```python
from idfkit.simulation import simulate
from idfkit.simulation.progress_bars import tqdm_progress

with tqdm_progress(desc="Annual run") as cb:
    result = simulate(model, "weather.epw", annual=True, on_progress=cb)
```

simulate(model, weather, *, output_dir=None, energyplus=None, expand_objects=True, annual=False, design_day=False, output_prefix='eplus', output_suffix='C', readvars=False, timeout=3600.0, extra_args=None, cache=None, fs=None, on_progress=None)

Run an EnergyPlus simulation.

Creates an isolated run directory, writes the model, and executes EnergyPlus as a subprocess. The caller's model is not mutated.

When expand_objects is True (the default) and the model contains GroundHeatTransfer:Slab:* or GroundHeatTransfer:Basement:* objects, the Slab and/or Basement ground heat-transfer preprocessors are run automatically before simulation. This is equivalent to calling run_slab_preprocessor or run_basement_preprocessor individually, but happens transparently.

Parameters:

Name Type Description Default
model IDFDocument

The EnergyPlus model to simulate.

required
weather str | Path

Path to the weather file (.epw).

required
output_dir str | Path | None

Directory for output files (default: auto temp dir).

None
energyplus EnergyPlusConfig | None

Pre-configured EnergyPlus installation. If None, uses find_energyplus for auto-discovery.

None
expand_objects bool

Run ExpandObjects before simulation. When True, also runs the Slab and Basement ground heat-transfer preprocessors if the model contains the corresponding objects.

True
annual bool

Run annual simulation (-a flag).

False
design_day bool

Run design-day-only simulation (-D flag).

False
output_prefix str

Prefix for output files (default "eplus").

'eplus'
output_suffix Literal['C', 'L', 'D']

Output file naming suffix: "C" for combined table files (default), "L" for legacy separate table files, or "D" for timestamped separate files.

'C'
readvars bool

Run ReadVarsESO after simulation (-r flag).

False
timeout float

Maximum runtime in seconds (default 3600).

3600.0
extra_args list[str] | None

Additional command-line arguments.

None
cache SimulationCache | None

Optional simulation cache for content-hash lookups.

None
fs FileSystem | None

Optional file system backend for storing results on remote storage (e.g., S3). When provided, output_dir is required and specifies the remote destination path. EnergyPlus runs locally in a temp directory; results are then uploaded to output_dir via fs after execution.

Note

The fs parameter handles output storage only. The weather file must be a local path — remote weather files are not automatically downloaded. For cloud workflows, download weather files first using WeatherDownloader or pre-stage them locally before calling simulate().

None
on_progress Callable[[SimulationProgress], Any] | Literal['tqdm'] | None

Optional callback invoked with a SimulationProgress event each time EnergyPlus emits a progress line (warmup iterations, simulation day changes, post-processing steps, etc.). Pass "tqdm" to use a built-in tqdm progress bar (requires pip install idfkit[progress]).

None

Returns:

Type Description
SimulationResult

SimulationResult with paths to output files.

Raises:

Type Description
SimulationError

On timeout, OS error, or missing weather file.

ExpandObjectsError

If a preprocessing step (ExpandObjects, Slab, or Basement) fails during automatic preprocessing.

EnergyPlusNotFoundError

If EnergyPlus cannot be found.