chump.objects.dataobjects.tseries

read time series data.
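
Example (a minimal sketch combining parameters described below; the file and column names are illustrative):

>>> source = 'obs_heads.csv'
>>> valcol = 'Observed'
>>> plottype = 'line'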

source

set the file path of the data source.
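
Example (the file name is illustrative):

>>> source = 'gwlevels.xlsx'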

sheet

set the sheet name when reading data from an Excel workbook. Default is None.
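
Example (the sheet name is illustrative):

>>> sheet = 'Sheet1'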

valcol: str | list

a string or a list of strings defining the value columns in the data source.

Example:

>>> valcol = 'Simulated'
>>> valcol = ['Observed', 'Simulated']
plottype

set the plotting type of the time series, must be 'bar' or 'line'. Default is 'line'.

Example:

>>> plottype = 'bar'
plotarg

plotting arguments for the object.
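
Example (illustrative; follows the same form as the plotargs entries below):

>>> plotarg = {color='k'}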

plotargs: dict

a dictionary to set additional plotting arguments for different data columns.

Example:

>>> plotargs = {Observed={color='b'}, Simulated={color='r'}}
location

a dictionary containing a geospatial data object (shp or csv) and a table to define the locations of the plotting sites/wells.

Example:

>>> location = {shp='huc12'}
nmin

minimum number of value entries. Sites/wells with observation counts less than this number will be excluded. Default is 0.

Example:

>>> nmin = 5
stack

a list of idcol and valcol names. If set, the columns in the table are stacked. It can be used for MF6 OBS outputs.

Example:

>>> stack = ['WellName', 'SimulatedHead']
aggfunc

apply an aggregation function based on the site/well id. Default is None.

Example:

>>> aggfunc = 'sum'
a_scale

apply the scaling value a_scale to all values.
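
Example (illustrative):

>>> a_scale = 0.3048 # convert values from feet to meters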

a_offset

apply the offset value a_offset to all values.
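
Example (illustrative):

>>> a_offset = -10.0 # apply an offset of -10 to all values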

t_scale

set a time scaling factor. Default is 1.0. Final time values = original time values * t_scale + t_offset.

Example:

>>> t_scale = 86400 # convert days to seconds
t_offset

set a time offset. Default is 0.0. Final time values = original time values * t_scale + t_offset.

Example:

>>> t_offset = -365 # shift the time values back by one year
limits

drop values outside the range given by limits.

Example:

>>> limits = [1000, 5000]
abslimits

drop values whose absolute value is outside the range given by abslimits.

Example:

>>> abslimits = [1, 1e10]
mintime

keep data with time greater than or equal to mintime.
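
Example (illustrative; the value must match the time format of the data source):

>>> mintime = '2000-01-01'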

maxtime

keep data with time less than or equal to maxtime.
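
Example (illustrative; the value must match the time format of the data source):

>>> maxtime = '2020-12-31'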

tfunc

apply a function over time.

Example:

>>> tfunc = 'mean' # calculate the mean values over time
resample

resample data to another time frequency. See https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.resample.html.

Example:

>>> resample = {rule='7D', func='mean'}
add

add values of another object.

Example:

>>> add = {other={csv='obslevel'}, fill_value=0}
subtract

subtract values of another object.

Example:

>>> subtract = {other={csv='obslevel'}, fill_value=0}
mul

multiply by the values of another object.

Example:

>>> mul = {other={csv='obslevel'}, fill_value=1}
div

divide by the values of another object.

Example:

>>> div = {other={csv='obslevel'}, fill_value=1}
calculate

set the object as the result of a mathematical expression. The expression will be evaluated by Python's eval function.

Examples:

>>> [mfbin.simdiff]
>>> calculate = '(simheadA - simheadB) - (simheadC - simheadD)'

where simheadA, simheadB, simheadC and simheadD are names of other mfbin objects.

extfunc

call a function in an external Python script after reading the data. extfunc sets the file name of the script. This script needs to be placed in the working directory. The function name must be "extfunc" and the first argument is the object. The dataframe of the object can be accessed through obj.dat where obj is the argument name in your function.

Example:

>>> # code inside the external script
>>> def extfunc(obj):
>>>     # extract results for the final 12 months
>>>     obj.dat = obj.dat.iloc[-12:]
writedata

set writedata to true to export the data. The data will be saved as a CSV or SHP file.
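
Example:

>>> writedata = true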

writefunc

call a function in an external Python script to write data. writefunc sets the file name of the script. This script needs to be placed in the working directory. The function name must be "writefunc" and the first argument is the object. The dataframe of the object can be accessed through obj.dat where obj is the argument name in your function.

Example:

>>> # code inside the external script
>>> import numpy as np
>>> def writefunc(obj):
>>>     # write head results as 2D arrays, one file per time and layer
>>>     nrow = 50
>>>     ncol = 100
>>>     for (time, layer), r in obj.dat.iterrows():
>>>         np.savetxt(f'time{time}_layer{layer}.dat', r.to_numpy().reshape([nrow, ncol]))