foundryts.nodes.FunctionNode

class foundryts.nodes.FunctionNode(children)

Lazy query container that transforms one or more time series into a new time series, the output of the supplied transformation function.

Each FunctionNode can be chained into another FunctionNode or summarized into a final SummarizerNode.

You can also resolve a lazy FunctionNode with FunctionNode.to_pandas() or FunctionNode.to_dataframe(), which evaluates the query and yields the transformed time series as a dataframe.

Examples

>>> series = F.points(
...     (100, 0.0),
...     (200, float("inf")),
...     (300, 3.14159),
...     (2147483647, 1.0),
...     name="series"
... )
>>> series.to_pandas()
                      timestamp    value
0 1970-01-01 00:00:00.000000100  0.00000
1 1970-01-01 00:00:00.000000200      inf
2 1970-01-01 00:00:00.000000300  3.14159
3 1970-01-01 00:00:02.147483647  1.00000

>>> scaled = series.scale(1.5)
# scaled is a FunctionNode that is not evaluated yet
# scaled can be chained to another FunctionNode operation, resulting in another unevaluated FunctionNode
>>> time_shifted = scaled.time_shift(1000)
# converting time_shifted to a pandas dataframe evaluates the lazy query, applying both the scale and
# time_shift functions
>>> time_shifted.to_pandas()
                      timestamp     value
0 1970-01-01 00:00:00.000001100  0.000000
1 1970-01-01 00:00:00.000001200       inf
2 1970-01-01 00:00:00.000001300  4.712385
3 1970-01-01 00:00:02.147484647  1.500000

columns()

Returns a tuple of strings representing the column names of the pandas.DataFrame that evaluating this node would produce.

Note

Keys of nested objects are flattened, with nested keys joined by a period (e.g. smallest_point.timestamp).

  • Returns: Tuple containing the names of the columns in the dataframe this node evaluates to.
  • Return type: Tuple[str]

Examples

>>> series_node = foundryts.functions.points((100, 0.0), (200, 1.0))
>>> series_node.columns()
("timestamp", "value")

>>> stats_node = series_node.statistics(start=0, end=100, window_size=None)
>>> stats_node.columns()
("count", "smallest_point.timestamp", "start_timestamp", "latest_point.timestamp", "mean", "earliest_point.timestamp", "largest_point.timestamp", "end_timestamp")

cumulative_aggregate(*args, **kwargs)

See foundryts.functions.cumulative_aggregate()

derivative()

See foundryts.functions.derivative()

distribution(start=None, end=None, bins=None, start_value=None, end_value=None)

See foundryts.functions.distribution()

dsl(program, return_type, labels=None, before='nearest', internal='linear', after='nearest')

See foundryts.functions.dsl()

first_point()

See foundryts.functions.first_point()

integral(method='LINEAR')

See foundryts.functions.integral()

interpolate(before=None, internal=None, after=None, frequency=None, rename_columns_by=None, static_column_name=None)

See foundryts.functions.interpolate()

last_point()

See foundryts.functions.last_point()

mean(children)

See foundryts.functions.mean()
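mean(children) presumably averages the aligned points of its child series; see the linked reference for the authoritative semantics, including how misaligned points are handled. As a minimal pandas sketch (not the FoundryTS API) of a pointwise mean over two evaluated series that share timestamps:

```python
import pandas as pd

# Two evaluated series, shaped like the output of to_pandas()
a = pd.DataFrame({"timestamp": pd.to_datetime([100, 200], unit="ns"), "value": [1.0, 3.0]})
b = pd.DataFrame({"timestamp": pd.to_datetime([100, 200], unit="ns"), "value": [3.0, 5.0]})

# Align the two series on timestamp, then average the values pointwise
merged = a.merge(b, on="timestamp", suffixes=("_a", "_b"))
mean_df = merged.assign(value=(merged["value_a"] + merged["value_b"]) / 2)[["timestamp", "value"]]
```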

periodic_aggregate(*args, **kwargs)

See foundryts.functions.periodic_aggregate()

rolling_aggregate(*args, **kwargs)

See foundryts.functions.rolling_aggregate()

scale(factor)

See foundryts.functions.scale()
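The overview example above shows scale(1.5) multiplying every value by the factor while leaving timestamps untouched. As a rough sketch of the equivalent transformation on an already-evaluated dataframe (plain pandas, not the FoundryTS API):

```python
import pandas as pd

# An evaluated series, shaped like the output of to_pandas()
df = pd.DataFrame({
    "timestamp": pd.to_datetime([100, 200, 300], unit="ns"),
    "value": [0.0, float("inf"), 3.14159],
})

# scale(factor) multiplies each point's value by the factor;
# timestamps are unchanged
factor = 1.5
scaled = df.assign(value=df["value"] * factor)
```

Doing this through FunctionNode.scale() instead keeps the query lazy, so the multiplication only runs when the node is evaluated.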

scatter(start_timestamp, end_timestamp, first_interpolation, second_interpolation, regression_fit)

See foundryts.functions.scatter()

property series_ids

All series identifiers used by this node and its child nodes.

skip_nonfinite()

See foundryts.functions.skip_nonfinite()
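Judging by its name, skip_nonfinite() drops points whose values are not finite (inf, -inf, NaN); treat the exact semantics as an assumption and defer to the linked reference. A pandas sketch of that filtering on an evaluated series:

```python
import numpy as np
import pandas as pd

# An evaluated series containing a non-finite point
df = pd.DataFrame({
    "timestamp": pd.to_datetime([100, 200, 300], unit="ns"),
    "value": [0.0, float("inf"), 3.14159],
})

# Keep only the rows whose value is finite (drops inf, -inf and NaN)
finite = df[np.isfinite(df["value"])].reset_index(drop=True)
```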

statistics(start=None, end=None, window=None, **kwargs)

See foundryts.functions.statistics()

sum(children)

See foundryts.functions.sum()

time_extent()

See foundryts.functions.time_extent()

time_range(start=None, end=None)

See foundryts.functions.time_range()

time_shift(duration)

See foundryts.functions.time_shift()
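In the overview example, time_shift(1000) moved every point 1000 nanoseconds later while leaving values untouched. A sketch of the equivalent transformation on an evaluated dataframe (plain pandas, not the FoundryTS API):

```python
import pandas as pd

# An evaluated series, shaped like the output of to_pandas()
df = pd.DataFrame({
    "timestamp": pd.to_datetime([100, 200, 300], unit="ns"),
    "value": [0.0, 1.0, 2.0],
})

# time_shift(duration) offsets every timestamp by the duration;
# values are unchanged
shifted = df.assign(timestamp=df["timestamp"] + pd.Timedelta(1000, unit="ns"))
```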

to_dataframe(fts=None)

Evaluates this node to a pyspark.sql.DataFrame.

PySpark DataFrames enable distributed data processing and parallelized transformations. They are useful for dataframes with a large number of rows, for example when loading all the points of a raw series or the result of a FunctionNode, or when evaluating multiple SummarizerNode or FunctionNode results together.

  • Parameters: fts (foundryts.FoundryTS , optional) – FoundryTS session used to execute the query (a new session will be created if not provided).
  • Returns: Output of the node evaluated to a PySpark dataframe.
  • Return type: pyspark.sql.DataFrame

Examples

>>> series_node = F.points(
...     (100, 0.0), (200, float("inf")), (300, 3.14159), (2147483647, 1.0), name="series"
... )
>>> series_node.to_dataframe().show()
+-------------------------------+---------+
| timestamp                     | value   |
+-------------------------------+---------+
| 1970-01-01 00:00:00.000000100 | 0.0     |
| 1970-01-01 00:00:00.000000200 | Infinity|
| 1970-01-01 00:00:00.000000300 | 3.14159 |
| 1970-01-01 00:00:02.147483647 | 1.0     |
+-------------------------------+---------+

to_pandas(fts=None)

Evaluates this node to a pandas.DataFrame.

This is useful for loading raw or transformed time series data into a pandas.DataFrame and applying further pandas operations to it.

  • Parameters: fts (foundryts.FoundryTS , optional) – FoundryTS session used to execute the query (a new session will be created if not provided).
  • Returns: Output of the node evaluated to a Pandas dataframe.
  • Return type: pd.DataFrame

Examples

>>> series = F.points(
...     (100, 0.0), (200, float("inf")), (300, 3.14159), (2147483647, 1.0), name="series"
... )
>>> series.to_pandas()
                      timestamp    value
0 1970-01-01 00:00:00.000000100  0.00000
1 1970-01-01 00:00:00.000000200      inf
2 1970-01-01 00:00:00.000000300  3.14159
3 1970-01-01 00:00:02.147483647  1.00000

types()

Returns a tuple of types for the columns of the pandas.DataFrame that evaluating this node would produce.

  • Returns: Tuple containing the types of the columns in the dataframe this node evaluates to.
  • Return type: Tuple[Type]

Examples

>>> node = foundryts.functions.points()
>>> node.types()
(<class 'int'>, <class 'float'>)

>>> stats_node = node.statistics(start=0, end=100, window_size=None)
>>> stats_node.types()
(<class 'int'>, <class 'pandas._libs.tslibs.timestamps.Timestamp'>, <class 'float'>, <class 'pandas._libs.tslibs.timestamps.Timestamp'>, <class 'pandas._libs.tslibs.timestamps.Timestamp'>, <class 'float'>, <class 'pandas._libs.tslibs.timestamps.Timestamp'>, <class 'float'>, <class 'float'>, <class 'pandas._libs.tslibs.timestamps.Timestamp'>, <class 'float'>, <class 'pandas._libs.tslibs.timestamps.Timestamp'>)

udf(func, columns=None, types=None)

See foundryts.functions.udf()

unit_conversion(from_unit, to_unit)

See foundryts.functions.unit_conversion()

value_shift(delta)

See foundryts.functions.value_shift()
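By analogy with time_shift(duration), value_shift(delta) presumably offsets each point's value by delta; treat that as an assumption and see the linked reference. A pandas sketch of that transformation on an evaluated series:

```python
import pandas as pd

# An evaluated series, shaped like the output of to_pandas()
df = pd.DataFrame({
    "timestamp": pd.to_datetime([100, 200], unit="ns"),
    "value": [0.0, 3.14159],
})

# Add a constant delta to every value, leaving timestamps unchanged
delta = 2.0
shifted = df.assign(value=df["value"] + delta)
```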

where(true=None, false=None)

See foundryts.functions.where()