
EventCountPhenotype

Bases: Phenotype

EventCountPhenotype counts the number of events that occur on distinct days. It can additionally filter patients based on:

1. the number of distinct days on which an event occurred, by setting value_filter
2. the number of days between pairs of events, by setting relative_time_range

EventCountPhenotype is a composite phenotype, meaning that it does not operate directly on source data but instead takes another phenotype as an argument. The phenotype passed to EventCountPhenotype must have return_date set to 'all'; if return_date on the provided phenotype is set to 'first' or 'last', there is only one event per patient and there is nothing to count.

DATE: The event date selected by the component_date_select and return_date parameters. With return_date 'all', one row is returned per patient for every event that fulfills the criteria; 'first' returns the first fulfilling event date and 'last' the last. If component_date_select = 'first', the returned date is the first event of a qualifying pair; if component_date_select = 'second', it is the second.

VALUE: The number of distinct days on which the phenotype of interest occurred; e.g. a value of 4 means the phenotype occurred on 4 distinct days.
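
For intuition, a hypothetical worked example: suppose a patient has qualifying events on 2020-01-01, 2020-04-01, and 2021-01-01, with value_filter requiring at least 2 distinct days and relative_time_range requiring 90 to 180 days between events. Only the pair (2020-01-01, 2020-04-01) is 91 days apart and thus fulfills the pair criterion, so with component_date_select='second' and return_date='first' the returned DATE is 2020-04-01, and VALUE is 3 because events occurred on three distinct days.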

Parameters:

- name (required): The name of the phenotype.
- phenotype (Phenotype, required): The phenotype that returns events of interest. Note that return_date must be set to 'all' or an error will be thrown.
- value_filter (ValueFilter, default None): Set the minimum and/or maximum number of distinct days on which an event must occur.
- relative_time_range (RelativeTimeRangeFilter, default None): Set the minimum and/or maximum number of days allowed between any pair of events.
- return_date (default 'first'): Specifies whether to return the 'first', 'last', or 'all' dates on which the criteria are fulfilled.
- component_date_select (default 'second'): Specifies whether to return the 'first' or 'second' event date within each pair of events. It is highly recommended to never use 'first', as there is a high risk of introducing immortal time bias.
Example
codelist = Codelist(name="example_codelist", codes=[...])

phenotype = CodelistPhenotype(
    name="example_phenotype",
    domain="CONDITION_OCCURRENCE",
    codelist=codelist,
    return_date='all'  # must be 'all' so every event is available for counting
)

tables = {"CONDITION_OCCURRENCE": example_code_table}
multiple_occurrences = EventCountPhenotype(
    name="multiple_occurrences",
    phenotype=phenotype,
    value_filter=ValueFilter(min_value=GreaterThanOrEqualTo(2)),
    relative_time_range=RelativeTimeRangeFilter(
        min_days=GreaterThanOrEqualTo(90),
        max_days=LessThanOrEqualTo(180)
    ),
    return_date='first',
    component_date_select='second'
)

result_table = multiple_occurrences.execute(tables)
display(result_table)
Source code in phenex/phenotypes/event_count_phenotype.py
class EventCountPhenotype(Phenotype):
    """
    EventCountPhenotype counts the number of events that occur on distinct days. It can additionally filter patients based on:
    1. the number of distinct days on which an event occurred, by setting value_filter
    2. the number of days between pairs of events, by setting relative_time_range

    EventCountPhenotype is a composite phenotype, meaning that it does not operate directly on source data but instead takes another phenotype as an argument. The phenotype passed to EventCountPhenotype must have return_date set to 'all'; if return_date on the provided phenotype is set to `first` or `last`, there is only one event per patient and there is nothing to count.


    DATE: The event date selected by the `component_date_select` and `return_date` parameters. With `return_date` 'all', one row is returned per patient for every event that fulfills the criteria; 'first' returns the first fulfilling event date and 'last' the last. If component_date_select = 'first', the returned date is the first event of a qualifying pair; if component_date_select = 'second', it is the second.
    VALUE: The number of distinct days on which the phenotype of interest occurred; e.g. a value of 4 means the phenotype occurred on 4 distinct days.

    Parameters:
        name: The name of the phenotype.
        phenotype: The phenotype that returns events of interest. Note that return_date must be set to `all` or an error will be thrown.
        value_filter: Set the minimum and/or maximum number of distinct days on which an event must occur.
        relative_time_range: Set the minimum and/or maximum number of days allowed between any pair of events.
        return_date: Specifies whether to return the 'first', 'last', or 'all' dates on which the criteria are fulfilled. Default is 'first'.
        component_date_select: Specifies whether to return the 'first' or 'second' event date within each pair of events. Default is 'second'. It is highly recommended to never use 'first', as there is a high risk of introducing immortal time bias.

    Example:
        ```python
        codelist = Codelist(name="example_codelist", codes=[...])

        phenotype = CodelistPhenotype(
            name="example_phenotype",
            domain="CONDITION_OCCURRENCE",
            codelist=codelist,
            return_date='all'  # must be 'all' so every event is available for counting
        )

        tables = {"CONDITION_OCCURRENCE": example_code_table}
        multiple_occurrences = EventCountPhenotype(
            name="multiple_occurrences",
            phenotype=phenotype,
            value_filter=ValueFilter(min_value=GreaterThanOrEqualTo(2)),
            relative_time_range=RelativeTimeRangeFilter(
                min_days=GreaterThanOrEqualTo(90),
                max_days=LessThanOrEqualTo(180)
            ),
            return_date='first',
            component_date_select='second'
        )

        result_table = multiple_occurrences.execute(tables)
        display(result_table)
        ```
    """

    def __init__(
        self,
        phenotype: Phenotype,
        value_filter: ValueFilter = None,
        relative_time_range: RelativeTimeRangeFilter = None,
        return_date="first",
        component_date_select="second",
        **kwargs,
    ):
        super(EventCountPhenotype, self).__init__(**kwargs)
        self.relative_time_range = relative_time_range
        self.return_date = return_date
        self.component_date_select = component_date_select
        if self.component_date_select not in ["first", "second"]:
            raise ValueError(
                f"Invalid component_date_select: {self.component_date_select}"
            )
        self.value_filter = value_filter
        self.phenotype = phenotype
        self.add_children(phenotype)

    def _execute(self, tables) -> PhenotypeTable:
        # Execute the child phenotype to get the initial table to filter
        if self.phenotype.return_date != "all":
            raise ValueError(
                "EventCountPhenotype requires that return_date is set to all on its component phenotype"
            )
        table = self.phenotype.table

        # Select only distinct dates:
        table = table.select(["PERSON_ID", "EVENT_DATE"]).distinct()

        # Count occurrences per PERSON_ID
        occurrence_counts_table = table.group_by("PERSON_ID").aggregate(VALUE=_.count())
        table, occurrence_counts_table = self._perform_value_filtering(
            table, occurrence_counts_table
        )
        table = self._perform_relative_time_range_filtering(table)
        table = self._perform_date_selection(table)
        table = table.left_join(
            occurrence_counts_table.select("PERSON_ID", "VALUE"),
            table.PERSON_ID == occurrence_counts_table.PERSON_ID,
        ).select("PERSON_ID", "EVENT_DATE", "VALUE")

        table = table.mutate(BOOLEAN=True).distinct()
        return table

    def _perform_value_filtering(self, table, occurrence_counts_table):
        if self.value_filter is not None:
            occurrence_counts_table = self.value_filter.filter(occurrence_counts_table)
            table = table.right_join(
                occurrence_counts_table,
                table.PERSON_ID == occurrence_counts_table.PERSON_ID,
            ).select(["PERSON_ID", "EVENT_DATE", "VALUE"])
        return table, occurrence_counts_table

    def _perform_relative_time_range_filtering(self, table):
        if self.relative_time_range is not None:
            # make sure that the 'when' keyword parameter is correctly set to after
            self.relative_time_range.when = "after"
            # Self join and rename event_date columns;
            # the first dates will be called INDEX_DATE
            # the second dates will be called EVENT_DATE
            first_table = table.select(
                "PERSON_ID",
                table.EVENT_DATE.name("INDEX_DATE"),
            )
            second_table = table.select(
                "PERSON_ID",
                table.EVENT_DATE.name("EVENT_DATE"),
            )
            table = first_table.join(
                second_table, first_table.PERSON_ID == second_table.PERSON_ID
            )

            table = table.filter(table.INDEX_DATE <= table.EVENT_DATE)
            # perform relative time range filtering; the first date is the anchor ('index_date')
            table = self.relative_time_range.filter(table)

            if self.component_date_select == "first":
                table = table.select("PERSON_ID", "INDEX_DATE").rename(
                    {"EVENT_DATE": "INDEX_DATE"}
                )
            elif self.component_date_select == "second":
                table = table.select("PERSON_ID", "EVENT_DATE")
        return table

    def _perform_date_selection(self, table, reduce=True):
        if self.return_date is None or self.return_date == "all":
            return table
        if self.return_date == "first":
            aggregator = First(reduce=reduce)
        elif self.return_date == "last":
            aggregator = Last(reduce=reduce)
        else:
            raise ValueError(f"Unknown return_date: {self.return_date}")
        table = aggregator.aggregate(table)
        return table.select("PERSON_ID", "EVENT_DATE")

dependencies property

Recursively collect all dependencies of a node (including dependencies of dependencies).

Returns:

    Set[Node]: A set of Node objects on which this Node depends.
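
A minimal usage sketch, assuming the multiple_occurrences phenotype from the example above:

# Print the name of every node this phenotype depends on,
# including dependencies of dependencies.
for dep in multiple_occurrences.dependencies:
    print(dep.name)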

dependency_graph property

Build a dependency graph where each node maps to its direct dependencies (children).

Returns:

    Dict[Node, Set[Node]]: A mapping of each Node to its child Nodes.
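
A minimal usage sketch, again assuming the multiple_occurrences phenotype:

# Each key is a Node; each value is the set of Nodes it directly depends on.
for node, children in multiple_occurrences.dependency_graph.items():
    print(node.name, "->", sorted(child.name for child in children))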

execution_metadata property

Retrieve the full execution metadata row for this node from the local DuckDB database.

Returns:

    pandas.DataFrame: A table containing NODE_NAME, NODE_HASH, NODE_PARAMS, EXECUTION_PARAMS, EXECUTION_START_TIME, EXECUTION_END_TIME, and EXECUTION_DURATION for executions of this node, or None if the node has never been executed.
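
A minimal sketch, assuming the node has been executed at least once (the column names follow the description above):

meta = multiple_occurrences.execution_metadata
if meta is not None:
    print(meta[["NODE_NAME", "EXECUTION_DURATION"]])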

namespaced_table property

A PhenotypeTable has generic column names 'person_id', 'boolean', 'event_date', and 'value'. The namespaced_table prepends the phenotype name to all of these columns. This is useful when joining multiple phenotype tables together.

Returns:

    table (Table): The namespaced table for the current phenotype.
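
A minimal sketch of why namespacing helps, assuming the two phenotypes from the example above (the exact prefixed column names depend on each phenotype's name, so the join itself is omitted here):

# Both tables carry phenotype-specific column names, so they can be
# joined without 'person_id'/'event_date' collisions.
left = phenotype.namespaced_table
right = multiple_occurrences.namespaced_table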

reverse_dependency_graph property

Build a reverse dependency graph where each node maps to nodes that depend on it (parents).

Returns:

    Dict[Node, Set[Node]]: A mapping of each Node to its parent Nodes.

clear_cache(con=None, recursive=False)

Clear the cached state for this node, forcing re-execution on the next call to execute().

This method removes the node's hash from the node states table and optionally drops the materialized table from the database. After calling this method, the node will be treated as if it has never been executed before.

Parameters:

- con (Optional[object], default None): Database connector. If provided, clears only runs with a matching execution context and drops the materialized table. If None, clears all runs for the node.
- recursive (bool, default False): If True, also clear the cache for all child nodes recursively.
Example
# Clear all cached runs for a single node
my_node.clear_cache()

# Clear runs with specific execution context and drop materialized table
my_node.clear_cache(con=my_connector)

# Clear cache for node and all its dependencies
my_node.clear_cache(recursive=True)
Source code in phenex/node.py
def clear_cache(self, con: Optional[object] = None, recursive: bool = False):
    """
    Clear the cached state for this node, forcing re-execution on the next call to execute().

    This method removes the node's hash from the node states table and optionally drops the materialized table from the database. After calling this method, the node will be treated as if it has never been executed before.

    Parameters:
        con: Database connector. If provided, clears only runs with matching execution context and drops the materialized table. If None, clears all runs for the node.
        recursive: If True, also clear the cache for all child nodes recursively. Defaults to False.

    Example:
        ```python
        # Clear all cached runs for a single node
        my_node.clear_cache()

        # Clear runs with specific execution context and drop materialized table
        my_node.clear_cache(con=my_connector)

        # Clear cache for node and all its dependencies
        my_node.clear_cache(recursive=True)
        ```
    """
    # Delegate all logic to NodeManager
    return Node._node_manager.clear_cache(self, con=con, recursive=recursive)

execute(tables=None, con=None, overwrite=False, lazy_execution=False, n_threads=1)

Executes the Node computation for the current node and its dependencies.

Lazy Execution

When lazy_execution=True, nodes are only recomputed if changes are detected. The system tracks:

1. Node definition changes: detected by hashing the node's parameters (from to_dict()) and class name
2. Execution environment changes: detected by tracking source/destination database configurations

A node will be rerun if any of the following holds:

- The node's defining parameters have changed (different hash than last execution)
- The database connector's source or destination databases have changed
- The node has never been executed before

If no changes are detected, the node uses its cached result from the database instead of recomputing.

Requirements for lazy execution:

- A database connector (con) must be provided to store and retrieve cached results
- overwrite=True must be set to allow updating existing cached tables

State tracking is maintained in a local DuckDB database (__PHENEX_META__NODE_STATES table) that stores:

- Node hashes, parameters, and execution metadata
- Database connector configuration used during execution
- Execution timing information
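
A minimal sketch of lazy execution, assuming the multiple_occurrences phenotype and tables dictionary from the EventCountPhenotype example above and a placeholder connector object my_connector (not a specific PhenEx connector class):

result = multiple_occurrences.execute(
    tables=tables,
    con=my_connector,      # placeholder database connector
    overwrite=True,        # required for lazy execution
    lazy_execution=True,   # reuse cached tables when nothing has changed
)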

Parameters:

- tables (Dict[str, Table], default None): A dictionary mapping domains to Table objects.
- con (Optional[object], default None): Connection to a database for materializing outputs. If provided, outputs from this node and all child nodes will be materialized (written) to the database using the connector. Required for lazy_execution.
- overwrite (bool, default False): If True, will overwrite any existing tables found in the database while writing. If False, will throw an error when an existing table is found. Has no effect if con is not passed. Must be True when using lazy_execution.
- lazy_execution (bool, default False): If True, only re-executes nodes when changes are detected in either the node definition or the execution environment. Requires con to be provided.
- n_threads (int, default 1): Maximum number of Nodes to execute simultaneously when this node has multiple children.

Returns:

    Table: The resulting table for this node. Also accessible through self.table after calling self.execute().

Raises:

    ValueError: If lazy_execution=True but overwrite=False or con=None.
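
A hedged sketch of multithreaded execution (my_connector is again a placeholder; with n_threads > 1, independent child nodes are scheduled in parallel, as implemented in the source below):

result = multiple_occurrences.execute(
    tables=tables,
    con=my_connector,  # placeholder connector; outputs are materialized
    overwrite=True,
    n_threads=4,       # up to 4 independent nodes execute simultaneously
)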

Source code in phenex/node.py
def execute(
    self,
    tables: Dict[str, Table] = None,
    con: Optional[object] = None,
    overwrite: bool = False,
    lazy_execution: bool = False,
    n_threads: int = 1,
) -> Table:
    """
    Executes the Node computation for the current node and its dependencies.

    Lazy Execution:
        When lazy_execution=True, nodes are only recomputed if changes are detected. The system tracks:
        1. Node definition changes: Detected by hashing the node's parameters (from to_dict()) and class name
        2. Execution environment changes: Detected by tracking source/destination database configurations

        A node will be rerun if either:
        - The node's defining parameters have changed (different hash than last execution)
        - The database connector's source or destination databases have changed
        - The node has never been executed before

        If no changes are detected, the node uses its cached result from the database instead of recomputing.

        Requirements for lazy execution:
        - A database connector (con) must be provided to store and retrieve cached results
        - overwrite=True must be set to allow updating existing cached tables

        State tracking is maintained in a local DuckDB database (__PHENEX_META__NODE_STATES table) that stores:
        - Node hashes, parameters, and execution metadata
        - Database connector configuration used during execution
        - Execution timing information

    Parameters:
        tables: A dictionary mapping domains to Table objects.
        con: Connection to database for materializing outputs. If provided, outputs from the node and all children nodes will be materialized (written) to the database using the connector. Required for lazy_execution.
        overwrite: If True, will overwrite any existing tables found in the database while writing. If False, will throw an error when an existing table is found. Has no effect if con is not passed. Must be True when using lazy_execution.
        lazy_execution: If True, only re-executes nodes when changes are detected in either the node definition or execution environment. Defaults to False. Requires con to be provided.
        n_threads: Max number of Nodes to execute simultaneously when this node has multiple children.

    Returns:
        Table: The resulting table for this node. Also accessible through self.table after calling self.execute().

    Raises:
        ValueError: If lazy_execution=True but overwrite=False or con=None.
    """
    # Handle None tables
    if tables is None:
        tables = {}

    # Build dependency graph for all dependencies
    all_deps = self.dependencies
    nodes = {node.name: node for node in all_deps}
    nodes[self.name] = self  # Add self to the nodes

    # Build dependency and reverse graphs
    dependency_graph = self._build_dependency_graph(nodes)
    reverse_graph = self._build_reverse_graph(dependency_graph)

    # Track completion status and results
    completed = set()
    completion_lock = threading.Lock()
    worker_exceptions = []  # Track exceptions from worker threads
    stop_all_workers = (
        threading.Event()
    )  # Signal to stop all workers on first error

    # Track in-degree for scheduling
    in_degree = {}
    for node_name, dependencies in dependency_graph.items():
        in_degree[node_name] = len(dependencies)
    for node_name in nodes:
        if node_name not in in_degree:
            in_degree[node_name] = 0

    # Queue for nodes ready to execute
    ready_queue = queue.Queue()

    # Add nodes with no dependencies to ready queue
    for node_name, degree in in_degree.items():
        if degree == 0:
            ready_queue.put(node_name)

    def worker():
        """Worker function for thread pool"""
        while not stop_all_workers.is_set():
            try:
                node_name = ready_queue.get(timeout=1)
                # timeout forces to wait 1 second to avoid busy waiting
                if node_name is None:  # Sentinel value to stop worker
                    break
            except queue.Empty:
                continue

            try:
                logger.info(
                    f"Thread {threading.current_thread().name}: executing node '{node_name}'"
                )
                node = nodes[node_name]

                # Execute the node (without recursive child execution since we handle dependencies here)
                if lazy_execution:
                    if not overwrite:
                        raise ValueError(
                            "lazy_execution only works with overwrite=True."
                        )
                    if con is None:
                        raise ValueError(
                            "A DatabaseConnector is required for lazy execution."
                        )

                    if Node._node_manager.should_rerun(node, con):
                        # Time the execution
                        node.lastexecution_start_time = datetime.now()
                        table = node._execute(tables)

                        if (
                            table is not None
                        ):  # Only create table if _execute returns something
                            con.create_table(table, node_name, overwrite=overwrite)
                            table = con.get_dest_table(node_name)

                        node.lastexecution_end_time = datetime.now()
                        node.lastexecution_duration = (
                            node.lastexecution_end_time
                            - node.lastexecution_start_time
                        ).total_seconds()

                        Node._node_manager.update_run_params(node, con)
                    else:
                        table = con.get_dest_table(node_name)
                else:
                    # Time the execution
                    node.lastexecution_start_time = datetime.now()
                    table = node._execute(tables)

                    if (
                        con and table is not None
                    ):  # Only create table if _execute returns something
                        con.create_table(table, node_name, overwrite=overwrite)
                        table = con.get_dest_table(node_name)

                    node.lastexecution_end_time = datetime.now()
                    node.lastexecution_duration = (
                        node.lastexecution_end_time - node.lastexecution_start_time
                    ).total_seconds()

                node.table = table

                with completion_lock:
                    completed.add(node_name)

                    # Update in-degree for dependent nodes and add ready ones to queue
                    for dependent in reverse_graph.get(node_name, set()):
                        in_degree[dependent] -= 1
                        if in_degree[dependent] == 0:
                            # Check if all dependencies are completed
                            deps_completed = all(
                                dep in completed
                                for dep in dependency_graph.get(dependent, set())
                            )
                            if deps_completed:
                                ready_queue.put(dependent)

                # Log completion with timing info
                if node.lastexecution_duration is not None:
                    logger.info(
                        f"Thread {threading.current_thread().name}: completed node '{node_name}' "
                        f"in {node.lastexecution_duration:.3f} seconds"
                    )
                else:
                    logger.info(
                        f"Thread {threading.current_thread().name}: completed node '{node_name}' (cached)"
                    )

            except Exception as e:
                logger.error(f"Error executing node '{node_name}': {str(e)}")
                with completion_lock:
                    # Store exception for main thread
                    worker_exceptions.append(e)
                    # Signal all workers to stop immediately and exit worker loop
                    stop_all_workers.set()
                    break
            finally:
                ready_queue.task_done()

    # Start worker threads
    threads = []
    for i in range(min(n_threads, len(nodes))):
        thread = threading.Thread(target=worker, name=f"PhenexWorker-{i}")
        thread.daemon = True
        thread.start()
        threads.append(thread)

    # Wait for all nodes to complete or for an error to occur
    while (
        len(completed) < len(nodes)
        and not worker_exceptions
        and not stop_all_workers.is_set()
    ):
        threading.Event().wait(0.1)  # Small delay to prevent busy waiting

    if not stop_all_workers.is_set():
        # Time to stop workers and cleanup
        stop_all_workers.set()

    # Check if any worker thread had an exception
    if worker_exceptions:
        # Signal workers to stop
        for _ in threads:
            ready_queue.put(None)
        # Wait for threads to finish
        for thread in threads:
            thread.join(timeout=1)
        # Re-raise the first exception
        raise worker_exceptions[0]

    # Signal workers to stop and wait for them
    for _ in threads:
        ready_queue.put(None)  # Sentinel value to stop workers

    for thread in threads:
        thread.join(timeout=1)

    logger.info(
        f"Node '{self.name}': completed multithreaded execution of {len(nodes)} nodes"
    )
    return self.table

visualize_dependencies()

Create a text visualization of the dependency graph for this node and its dependencies.

Returns:

    str: A text representation of the dependency graph.
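
For example (the output shape follows the source below; actual node names depend on your phenotypes):

print(multiple_occurrences.visualize_dependencies())
# Dependencies for Node 'multiple_occurrences':
#   example_phenotype (no dependencies)
#   multiple_occurrences depends on: example_phenotype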

Source code in phenex/node.py
def visualize_dependencies(self) -> str:
    """
    Create a text visualization of the dependency graph for this node and its dependencies.

    Returns:
        str: A text representation of the dependency graph
    """
    lines = [f"Dependencies for Node '{self.name}':"]

    # Get all dependencies
    all_deps = self.dependencies
    nodes = {node.name: node for node in all_deps}
    nodes[self.name] = self  # Add self to the nodes

    # Build dependency graph
    dependency_graph = self._build_dependency_graph(nodes)

    for node_name in sorted(nodes.keys()):
        dependencies = dependency_graph.get(node_name, set())
        if dependencies:
            deps_str = ", ".join(sorted(dependencies))
            lines.append(f"  {node_name} depends on: {deps_str}")
        else:
            lines.append(f"  {node_name} (no dependencies)")

    return "\n".join(lines)