Logging

idfkit uses Python's standard logging module throughout the library. Every module logs through a namespaced logger (e.g. idfkit.idf_parser, idfkit.simulation.runner), so you can control verbosity per subsystem using normal Python logging configuration.

By default, no output is produced: idfkit never calls logging.basicConfig() or installs handlers, following standard best practice for library logging. You opt in to log output by configuring handlers in your own application.
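
If you prefer not to touch the root logger, the opt-in can be as small as attaching a handler directly to the idfkit namespace. A minimal sketch using only the standard logging module:

import logging

# Attach a console handler to the idfkit namespace only, leaving the
# root logger (and any other libraries) untouched
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))

logger = logging.getLogger("idfkit")
logger.addHandler(handler)
logger.setLevel(logging.INFO)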

Quick Start

The simplest way to see idfkit logs is to call logging.basicConfig():

import logging

# Show all INFO-level messages from idfkit
logging.basicConfig(level=logging.INFO)

# Load and parse — idfkit logs progress automatically
from idfkit import load_idf

doc = load_idf("model.idf")
# INFO:idfkit.idf_parser:Parsed 850 objects from model.idf in 0.142s

Log Levels

idfkit uses three log levels:

Level     What Gets Logged
DEBUG     Diagnostic details: file paths, version detection, mmap usage, cache key computation, command lines, candidate paths
INFO      Operational milestones: parsing/writing completion with timing, simulation start/finish, batch progress, schema loading, weather downloads
WARNING   Potential problems: unknown object types skipped during parsing, non-zero EnergyPlus exit codes

Errors are raised as exceptions (not logged), so there is no ERROR-level output.
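
In other words, a failed call surfaces as a raised exception rather than an ERROR log record. A minimal sketch (catching a generic Exception, since the specific exception classes are not listed here):

import logging

logging.basicConfig(level=logging.INFO)

from idfkit import load_idf

try:
    doc = load_idf("model.idf")
except Exception:
    # Failures are signalled by raising, not by an ERROR log record,
    # so handle (or re-raise) the exception here
    raise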

Logger Hierarchy

All loggers live under the idfkit namespace:

idfkit
├── idfkit.schema              # Schema loading and caching
├── idfkit.idf_parser          # IDF file parsing
├── idfkit.epjson_parser       # epJSON file parsing
├── idfkit.writers             # IDF/epJSON file writing
├── idfkit.document            # Object add/remove/rename
├── idfkit.validation          # Schema validation
├── idfkit.geometry            # Geometry operations (WWR, intersect_match)
├── idfkit.simulation
│   ├── idfkit.simulation.config       # EnergyPlus discovery
│   ├── idfkit.simulation.runner       # Simulation execution
│   ├── idfkit.simulation.async_runner # Async simulation
│   ├── idfkit.simulation.batch        # Batch processing
│   ├── idfkit.simulation.async_batch  # Async batch processing
│   ├── idfkit.simulation.cache        # Simulation result caching
│   └── idfkit.simulation.expand       # Preprocessors (ExpandObjects, Slab, Basement)
└── idfkit.weather
    ├── idfkit.weather.download        # EPW/DDY downloads
    └── idfkit.weather.index           # Station index loading

Setting a level on a parent logger applies to all children that do not set their own level. For example, logging.getLogger("idfkit.simulation").setLevel(logging.DEBUG) enables debug output for the runner, cache, batch, and all other simulation loggers.

Targeted Logging

Enable verbose output only for the subsystems you care about:

import logging

# Silence everything by default
logging.basicConfig(level=logging.WARNING)

# Enable verbose output only for simulation
logging.getLogger("idfkit.simulation.runner").setLevel(logging.DEBUG)

# Or enable all simulation sub-loggers at once
logging.getLogger("idfkit.simulation").setLevel(logging.DEBUG)

Logging to a File

Write all idfkit output to a log file while keeping your console clean:

import logging

# Create a file handler for idfkit logs
handler = logging.FileHandler("idfkit.log")
handler.setLevel(logging.DEBUG)
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))

# Attach it to the top-level idfkit logger and lower the logger's level
# so that DEBUG records are actually emitted
idfkit_logger = logging.getLogger("idfkit")
idfkit_logger.addHandler(handler)
idfkit_logger.setLevel(logging.DEBUG)

# Optional: stop records from also propagating to any root handlers
# idfkit_logger.propagate = False

Debugging Simulations

When a simulation fails or produces unexpected results, enable DEBUG on the simulation subsystem to see every step — EnergyPlus discovery, cache lookups, the exact command line, and timing:

import logging

# Turn on DEBUG for the simulation subsystem
logging.basicConfig(
    level=logging.WARNING,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
logging.getLogger("idfkit.simulation").setLevel(logging.DEBUG)

# Now run a simulation — every step is visible:
#   DEBUG idfkit.simulation.config  Trying candidate /usr/local/EnergyPlus-24-2-0
#   INFO  idfkit.simulation.config  Found EnergyPlus 24.2.0 at /usr/local/EnergyPlus-24-2-0
#   DEBUG idfkit.simulation.runner  EnergyPlus: /usr/local/.../energyplus (version 24.2.0)
#   INFO  idfkit.simulation.runner  Starting simulation with weather weather.epw
#   DEBUG idfkit.simulation.runner  Cache miss for key a1b2c3d4e5f6
#   DEBUG idfkit.simulation.runner  Command: /usr/local/.../energyplus -w weather.epw -d ...
#   INFO  idfkit.simulation.runner  Simulation completed successfully in 12.3s

Integrating with Your Application

idfkit's loggers coexist naturally with your own application logging. Configure them together:

import logging

# Your application's own logger
app_logger = logging.getLogger("myapp")

# Configure the root logger once (controls everything)
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
)

# Fine-tune individual libraries
logging.getLogger("idfkit").setLevel(logging.INFO)
logging.getLogger("idfkit.simulation").setLevel(logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.WARNING)

app_logger.info("Starting energy model workflow")
# Both your logs and idfkit logs flow through the same handlers

Django

Use Django's LOGGING dict to configure idfkit loggers:

# In your Django settings.py
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
        },
    },
    "loggers": {
        "idfkit": {
            "handlers": ["console"],
            "level": "INFO",
        },
        "idfkit.simulation": {
            "handlers": ["console"],
            "level": "DEBUG",
            "propagate": False,
        },
    },
}
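
Django feeds this dict to the standard library's dictConfig, so outside Django you can apply an equivalent configuration directly (a sketch, not idfkit-specific API):

import logging.config

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        "idfkit": {"handlers": ["console"], "level": "INFO"},
    },
})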

Structured / JSON Logging

For production environments or log aggregation pipelines, attach a custom formatter to emit structured output:

import logging
import json
from datetime import datetime, timezone


class JSONFormatter(logging.Formatter):
    """Emit each log record as a single JSON line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": datetime.fromtimestamp(record.created, tz=timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())

logging.getLogger("idfkit").addHandler(handler)
logging.getLogger("idfkit").setLevel(logging.INFO)

Silencing All Output

Suppress all idfkit logging entirely:

import logging

# Suppress all idfkit output
logging.getLogger("idfkit").setLevel(logging.CRITICAL)

Key Log Messages

Here are some representative messages you will see at each level:

INFO

Logger                      Message
idfkit.idf_parser           Parsed 850 objects from model.idf in 0.142s
idfkit.schema               Loaded schema for version 24.1.0 (480 object types) in 0.523s
idfkit.simulation.runner    Starting simulation with weather weather.epw
idfkit.simulation.runner    Simulation completed successfully in 12.3s
idfkit.simulation.batch     Starting batch of 50 jobs with 8 workers
idfkit.validation           Validation complete: 0 errors, 2 warnings, 1 info
idfkit.weather.download     Downloading weather data for Chicago-OHare (WMO 725300)

DEBUG

Logger                      Message
idfkit.simulation.runner    Cache miss for key a1b2c3d4e5f6
idfkit.simulation.runner    Command: /usr/local/.../energyplus -w weather.epw -d ...
idfkit.simulation.config    Trying candidate /usr/local/EnergyPlus-24-2-0
idfkit.simulation.cache     Computed cache key a1b2c3d4e5f6
idfkit.idf_parser           Using mmap for large file (52428800 bytes)
idfkit.document             Added Zone 'Office'

WARNING

Logger                      Message
idfkit.idf_parser           Skipping unknown object type 'FooBar'
idfkit.simulation.runner    Simulation exited with code 1 in 5.2s
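
To react to these warnings programmatically (for example, failing a batch workflow when objects are skipped), you can attach a small collecting handler to the idfkit namespace. This is a sketch built on the standard logging module, not an idfkit API:

import logging


class WarningCollector(logging.Handler):
    """Collect WARNING-and-above records emitted under the idfkit namespace."""

    def __init__(self) -> None:
        super().__init__(level=logging.WARNING)
        self.records: list[logging.LogRecord] = []

    def emit(self, record: logging.LogRecord) -> None:
        self.records.append(record)


collector = WarningCollector()
logging.getLogger("idfkit").addHandler(collector)

# ... load models and run simulations ...

if collector.records:
    print(f"idfkit emitted {len(collector.records)} warning(s)")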

See Also