If the time-series data is stored in a table `data_table` with contents like this:
channel | timestamp | value |
---|---|---|
sccm.Alicat.Inj.Gas | 2022-09-15 03:19:25.496212+00 | 0 |
V.ThermoCo.Diss.AS | 2022-09-15 03:19:27.612427+00 | 6.605405e-05 |
mbar.IG.Vac.AS | 2022-09-15 03:19:31.490579+00 | 2.26e-07 |
mbar.IG.Vac.MS | 2022-09-15 03:19:31.529545+00 | 2e-09 |
mbar.IG.Vac.BS | 2022-09-15 03:19:31.610188+00 | 4e-09 |
Then the data-source part of the configuration looks like:

```yaml
slowdash_project:
  data_source:
    url: postgresql://USER:PASS@localhost:5432/DBNAME
    parameters:
      time_series:
        schema: data_table[channel]@timestamp=value
```
Change PostgreSQL to MySQL or SQLite for other SQL DB systems.
For InfluxDB, if only one tag is used for the channel names and one field for the data values (`MEAS,channel=CH1 value=VAL1 TIME`):

```yaml
slowdash_project:
  data_source:
    url: influxdb://ORGANIZATION@localhost:8086/BUCKET/MEASUREMENT
    parameters:
      token: TOKEN
```
For Redis, all the time-series entries will be taken from a database:

```yaml
slowdash_project:
  data_source:
    url: redis://localhost:6379/1
```
Notes:

- The `valid_from` attribute in a data-source description is either a timestamp/datetime, `now` ("current"), or `creation`. The default is timestamp `0`, meaning "neutral". `valid_from` should allow picking up the "newest" entry for a given time.
- The `aggregation` parameter in a data query is either `none` (default for time-series of scalars), `last` (default for time-series of objects), or `sum` (cumulative histogram / merged graph / appended table).

For a table storing time-series data, a "schema descriptor" describes which columns are for timestamps, tag(s), and field(s).
```
data_table[endpoint_name]@timestamp=value
data_table[endpoint_name,set_or_ist]@timestamp(unix)=value_raw,value_cal
data_table[endpoint_name,set_or_ist]@timestamp(unix)=value_raw(default),value_cal
```
- Tag column name(s) are listed in `[ ]`, separated by `,`.
- The timestamp column name follows `@`.
- After `=`, field column names are listed.
- The timestamp type can be indicated in `( )` after the column name. Type specifier is case insensitive.
- Only alphanumeric characters and `_`, `-`, `.`, and `:` are allowed for tags, fields, and column names. Therefore `';drop table datatable` is not allowed, and will be rejected, even though proper handling is technically possible.

Note that the numeric data values shown here can be of other scalar types (string etc.) or objects (histogram etc.).
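To make the descriptor grammar concrete, here is a small illustrative parser in Python (a sketch written for this document, not SlowDash's actual implementation):

```python
import re

def parse_schema(descriptor):
    """Split a schema descriptor of the form
    TABLE[TAG1,TAG2]@TIME(TYPE)=FIELD1(default),FIELD2
    into its components. Illustrative sketch, not SlowDash's own parser."""
    m = re.match(
        r'^(?P<table>[\w.:-]+)'          # table name
        r'(?:\[(?P<tags>[^\]]*)\])?'     # optional [tag, ...]
        r'(?:@(?P<time>[\w.:-]+)'        # optional @time_column
        r'(?:\((?P<ttype>[^)]*)\))?)?'   # optional (type)
        r'(?:=(?P<fields>.+))?$',        # optional =field, ...
        descriptor)
    if m is None:
        raise ValueError('invalid schema descriptor: ' + descriptor)
    tags = [t.strip() for t in (m.group('tags') or '').split(',') if t.strip()]
    fields, default_field = [], None
    for f in (m.group('fields') or '').split(','):
        f = f.strip()
        if not f:
            continue
        if f.endswith('(default)'):
            f = f[:-len('(default)')]
            default_field = f
        fields.append(f)
    return {
        'table': m.group('table'),
        'tags': tags,
        'time': m.group('time'),
        'time_type': (m.group('ttype') or '').lower() or None,  # case insensitive
        'fields': fields,
        'default_field': default_field,
    }

print(parse_schema('data_table[channel]@timestamp=value'))
```

A usage example: `parse_schema('data_table[endpoint_name,set_or_ist]@timestamp(unix)=value_raw(default),value_cal')` yields two tags, two fields, and `value_raw` as the default field.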
metric | timestamp | value |
---|---|---|
psia.Alicat.Inj.Gas | 2022-09-15 03:19:25.419417+00 | 9.6 |
degC.Alicat.Inj.Gas | 2022-09-15 03:19:25.458695+00 | 23.42 |
sccm.Alicat.Inj.Gas | 2022-09-15 03:19:25.496212+00 | 0 |
V.ThermoCo.Diss.AS | 2022-09-15 03:19:27.612427+00 | 6.605405e-05 |
V.PS.Diss.AS | 2022-09-15 03:19:29.387352+00 | 0.01 |
A.PS.Diss.AS | 2022-09-15 03:19:29.416561+00 | 0 |
mbar.IG.Vac.AS | 2022-09-15 03:19:31.490579+00 | 2.26e-07 |
mbar.IG.Vac.MS | 2022-09-15 03:19:31.529545+00 | 2e-09 |
mbar.IG.Vac.VSS | 2022-09-15 03:19:31.56965+00 | 1.3e-08 |
mbar.IG.Vac.BS | 2022-09-15 03:19:31.610188+00 | 4e-09 |
Here the tag column is `metric` and the field column is `value`; each channel is identified by a tag value such as `psia.Alicat.Inj.Gas`. The schema descriptor is:

```
table[metric]@timestamp(with timezone)=value
```

If the field list is omitted, all remaining columns are taken as fields, as in:

```
table[endpoint]@timestamp(unix)
```
RunNumber | TimeStamp | sccm.Alicat.Inj | mbar.CC10.Inj | K.ThrmCpl.Diss | mbar.IG.AS |
---|---|---|---|---|---|
3098 | 1664916014 | 3 | 1.18467 | 340.58 | 5.38333e-05 |
3097 | 1664915456 | 3 | 1.256 | 503.275 | 5.36e-05 |
3096 | 1664914833 | 3 | 1.36833 | 745.743 | 5.38333e-05 |
3095 | 1664913608 | 3 | 1.447 | 1154.09 | 5.44e-05 |
3094 | 1664913032 | 3 | 1.48933 | 1501.14 | 5.46667e-05 |
3093 | 1664912407 | 3 | 1.533 | 1799.61 | 5.52667e-05 |
3092 | 1664911835 | 3 | 1.576 | 2060.59 | 5.56e-05 |
3091 | 1664910949 | 0.1 | 0.163633 | 2069.99 | 3.82667e-06 |
3090 | 1664910320 | 0.1 | 0.163533 | 1820.41 | 2.72333e-06 |
3089 | 1664909732 | 0.1 | 0.163533 | 1521.82 | 2.54e-06 |
Here there is no tag column; the time column is `TimeStamp` (UNIX timestamps) and each remaining column, such as `sccm.Alicat.Inj`, becomes a channel. The schema descriptor lists the field columns:

```
RunTable@TimeStamp(unix)=sccm.Alicat.Inj,mbar.CC10.Inj,K.ThrmCpl.Diss,mbar.IG.AS
```

or, taking all non-time columns as fields:

```
RunTable@TimeStamp(unix)
```
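Such a "one row per run" table can be built and queried by time range with Python's built-in sqlite3; a self-contained sketch with a subset of the columns above (values hypothetical):

```python
import sqlite3

# Build an in-memory table shaped like RunTable above (subset of columns),
# then run the kind of time-range query a viewer would issue.
conn = sqlite3.connect(':memory:')
conn.execute('''
    CREATE TABLE RunTable (
        RunNumber INTEGER,
        TimeStamp INTEGER,
        "sccm.Alicat.Inj" REAL,
        "mbar.CC10.Inj" REAL
    )
''')
conn.executemany(
    'INSERT INTO RunTable VALUES (?, ?, ?, ?)',
    [
        (3098, 1664916014, 3.0, 1.18467),
        (3097, 1664915456, 3.0, 1.256),
        (3091, 1664910949, 0.1, 0.163633),
    ])

# With schema RunTable@TimeStamp(unix)=..., each listed column is a channel;
# dotted column names must be quoted in SQL.
start, stop = 1664915000, 1664917000
points = conn.execute(
    'SELECT TimeStamp, "sccm.Alicat.Inj" FROM RunTable'
    ' WHERE TimeStamp >= ? AND TimeStamp < ? ORDER BY TimeStamp',
    (start, stop)).fetchall()
print(points)  # -> [(1664915456, 3.0), (1664916014, 3.0)]
```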
metric | timestamp | value_raw | value_cal |
---|---|---|---|
psia.Alicat.Inj.Gas | 2022-09-15 03:19:25.419417+00 | 9.6 | 9.6 |
degC.Alicat.Inj.Gas | 2022-09-15 03:19:25.458695+00 | 23.42 | 23.42 |
sccm.Alicat.Inj.Gas | 2022-09-15 03:19:25.496212+00 | 0 | 0 |
V.ThermoCo.Diss.AS | 2022-09-15 03:19:27.612427+00 | 6.605405e-05 | 6.605405e-05 |
V.PS.Diss.AS | 2022-09-15 03:19:29.387352+00 | 0.01 | 0.01 |
A.PS.Diss.AS | 2022-09-15 03:19:29.416561+00 | 0 | 0 |
mbar.IG.Vac.AS | 2022-09-15 03:19:31.490579+00 | 2.26e-07 | 2.26e-07 |
mbar.IG.Vac.MS | 2022-09-15 03:19:31.529545+00 | 2e-09 | 2e-09 |
mbar.IG.Vac.VSS | 2022-09-15 03:19:31.56965+00 | 1.3e-08 | 1.3e-08 |
mbar.IG.Vac.BS | 2022-09-15 03:19:31.610188+00 | 4e-09 | 4e-09 |
Here the tag column is `metric` and there are two field columns, `value_raw` and `value_cal`. A channel is identified by the tag value plus a field suffix, e.g. `sccm.Alicat.Inj:value_cal`. Possible schema descriptors are:

```
table[metric]@timestamp(with timezone)=value_raw,value_cal
table[metric]@timestamp(with timezone)=value_raw(default),value_cal
table[metric]@timestamp(with timezone)
```

The field marked `(default)` is used when a channel is referred to without a field suffix.
metric | set_or_ist | timestamp | value_raw | value_cal |
---|---|---|---|---|
psia.Alicat.Inj.Gas | ist | 2022-09-15 03:19:25.419417+00 | 9.6 | 9.6 |
degC.Alicat.Inj.Gas | ist | 2022-09-15 03:19:25.458695+00 | 23.42 | 23.42 |
sccm.Alicat.Inj.Gas | ist | 2022-09-15 03:19:25.496212+00 | 0 | 0 |
V.ThermoCo.Diss.AS | ist | 2022-09-15 03:19:27.612427+00 | 6.605405e-05 | 6.605405e-05 |
V.PS.Diss.AS | ist | 2022-09-15 03:19:29.387352+00 | 0.01 | 0.01 |
A.PS.Diss.AS | ist | 2022-09-15 03:19:29.416561+00 | 0 | 0 |
mbar.IG.Vac.AS | ist | 2022-09-15 03:19:31.490579+00 | 2.26e-07 | 2.26e-07 |
mbar.IG.Vac.MS | ist | 2022-09-15 03:19:31.529545+00 | 2e-09 | 2e-09 |
mbar.IG.Vac.VSS | ist | 2022-09-15 03:19:31.56965+00 | 1.3e-08 | 1.3e-08 |
mbar.IG.Vac.BS | ist | 2022-09-15 03:19:31.610188+00 | 4e-09 | 4e-09 |
Here there are two tag columns, `metric` and `set_or_ist`, and two field columns, `value_raw` and `value_cal`; a channel is identified by the tag values and a field name combined, e.g. `sccm.Alicat.Inj:value_cal:ist`. The schema descriptor is:

```
table[metric,set_or_ist]@timestamp(with timezone)=value_raw,value_cal
```
DBMS | Python Module | Server-side down sampling |
---|---|---|
PostgreSQL | psycopg2 | yes |
MySQL | mysqlclient | yes |
SQLite | (none) | no |
Others (generic) | sqlalchemy | no |
PostgreSQL: install the psycopg2 module with `pip3 install psycopg2`.

```yaml
slowdash_project:
  data_source:
    type: PostgreSQL
    parameters:
      url: USER:PASS@HOST:PORT/DB
      schema: TABLE [ TAG_COLUMN ] @ TIME_COLUMN
```
or, if the URL is given with the `postgresql://` prefix, the `type` can be omitted:
```yaml
slowdash_project:
  data_source:
    url: postgresql://USER:PASS@HOST:PORT/DB
    parameters:
      schema: TABLE [ TAG_COLUMN ] @ TIME_COLUMN
```
MySQL: install the mysqlclient module with `pip3 install mysqlclient`.

```yaml
slowdash_project:
  data_source:
    type: MySQL
    parameters:
      url: USER:PASS@HOST:PORT/DB
      schema: TABLE [ TAG_COLUMN ] @ TIME_COLUMN
```
or
```yaml
slowdash_project:
  data_source:
    url: mysql://USER:PASS@HOST:PORT/DB
    parameters:
      schema: TABLE [ TAG_COLUMN ] @ TIME_COLUMN
```
SQLite (no additional Python module needed):

```yaml
slowdash_project:
  data_source:
    type: SQLite
    parameters:
      file: FILENAME
```
or
```yaml
slowdash_project:
  data_source:
    url: sqlite:///FILENAME
    parameters:
      schema: TABLE [ TAG_COLUMN ] @ TIME_COLUMN
```
Other DB systems (generic, via `sqlalchemy`): also install the Python package to access the specific DB, as described in the SQLAlchemy documentation.

```yaml
slowdash_project:
  data_source:
    type: SQLAlchemy
    parameters:
      url: DBTYPE://USER:PASS@HOST:PORT/DB
```
To access a table containing time-series data, write the schema in the `time_series` entry:
```yaml
slowdash_project:
  data_source:
    type: PostgreSQL
    parameters:
      url: postgresql://p8_db_user:****@localhost:5432/p8_sc_db
      time_series:
        schema: numeric_data[endpoint]@timestamp(with timezone)=value_raw
```
```yaml
time_series:
  schema: numeric_data[endpoint]@timestamp(with timezone)=value_raw
```

```yaml
time_series:
  schema: numeric_data@timestamp(with timezone)=value_raw,value_cal
```

```yaml
time_series:
  schema: numeric_data[endpoint]@timestamp(with timezone)=value_raw(default),value_cal
```
[TODO] Flags are currently not supported. To use the case-4 schemata, use the `where` and `suffix` options:
```yaml
time_series:
  schema: numeric_data[endpoint]@timestamp(with timezone)=value_raw(default),value_cal
  where: ist_or_set='ist'
  suffix: ':ist'
```
For time-series of objects, write the schema in the `object_time_series` entry:

```yaml
slowdash_project:
  data_source:
    type: PostgreSQL
    parameters:
      url: postgresql://p8_db_user:****@localhost:5432/p8_sc_db
      object_time_series:
        schema: histograms[channel]@timestamp(unix)=json
```
[TODO] DB system specific?
```yaml
slowdash_project:
  data_source:
    type: SQLite
    parameters:
      file: RunTable.db
      view:
        name: RunTable
        sql: select * from RunTable where TimeStamp >= ${FROM_UNIXTIME} and TimeStamp < ${TO_UNIXTIME}
```
The following placeholders in the view SQL are substituted with the query time range:

- `${FROM_UNIXTIME}`, `${TO_UNIXTIME}`
- `${FROM_DATETIME}`, `${TO_DATETIME}`
- `${FROM_DATETIME_NAIVE}`, `${TO_DATETIME_NAIVE}`
- `${FROM_DATETIME_UTC}`, `${TO_DATETIME_UTC}`
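The substitution can be pictured as plain placeholder replacement. A sketch (not SlowDash's internal code), assuming the bounds are given as UNIX seconds and showing only two of the placeholder pairs:

```python
import datetime

def render_view_sql(sql, start, stop):
    """Replace the ${FROM_*}/${TO_*} placeholders in a view's SQL.
    Illustrative sketch only; start/stop are UNIX seconds (UTC).
    The datetime format shown here is an assumption."""
    def dt_utc(t):
        return datetime.datetime.fromtimestamp(t, datetime.timezone.utc)
    substitutions = {
        '${FROM_UNIXTIME}': str(start),
        '${TO_UNIXTIME}': str(stop),
        '${FROM_DATETIME_UTC}': dt_utc(start).strftime("'%Y-%m-%d %H:%M:%S+00'"),
        '${TO_DATETIME_UTC}': dt_utc(stop).strftime("'%Y-%m-%d %H:%M:%S+00'"),
    }
    for placeholder, value in substitutions.items():
        sql = sql.replace(placeholder, value)
    return sql

sql = ('select * from RunTable where TimeStamp >= ${FROM_UNIXTIME}'
       ' and TimeStamp < ${TO_UNIXTIME}')
print(render_view_sql(sql, 1664910000, 1664920000))
# -> select * from RunTable where TimeStamp >= 1664910000 and TimeStamp < 1664920000
```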
For simple cases, just specify the Measurement name in the `time_series` entry:
```yaml
slowdash_project:
  data_source:
    type: InfluxDB
    parameters:
      url: influxdb://SlowDash@localhost:8086/TestData
      token: MY_TOKEN_HERE
      time_series:
        - measurement: TestTimeSeries
```
or in a short form:
```yaml
slowdash_project:
  data_source:
    url: influxdb://SlowDash@localhost:8086/TestData/TestTimeSeries
    parameters:
      token: MY_TOKEN_HERE
```
Data with one Tag and one Field (`meas,channel=CH1 value=VAL1 TIME`): the simple configuration above works. For data containing multiple tags and/or fields, specify the names using `schema`:

```yaml
time_series:
  schema: meas[channel]=value
```
Data with no Tag and multiple Fields (`meas ch1=VAL1,ch2=VAL2 TIME`): if the data does not have any Tag and all the Fields are used, the simple configuration above works. Otherwise, specify the names using `schema`:

```yaml
time_series:
  schema: meas=ch1,ch2
```
Data with one Tag and multiple Fields (`meas,channel=CH1 value_raw=VALRAW,value_cal=VALCAL TIME`): use `schema` to describe which tag and fields are used:

```yaml
time_series:
  schema: meas[channel]=value_raw(default),value_cal
```
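For reference, line-protocol records of the shape shown above can be assembled in a few lines of Python (a hand-rolled sketch; a real client such as influxdb-client provides its own Point builder and escaping):

```python
def line_protocol(measurement, tags, fields, time_ns):
    """Build one InfluxDB line-protocol record:
    MEAS,tag=value field=value,... TIME
    Minimal sketch: float fields only, no escaping of special characters."""
    tag_part = ''.join(',{}={}'.format(k, v) for k, v in tags.items())
    field_part = ','.join('{}={}'.format(k, float(v)) for k, v in fields.items())
    return '{}{} {} {}'.format(measurement, tag_part, field_part, time_ns)

print(line_protocol('meas', {'channel': 'CH1'},
                    {'value_raw': 9.6, 'value_cal': 9.6},
                    1663211965000000000))
# -> meas,channel=CH1 value_raw=9.6,value_cal=9.6 1663211965000000000
```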
For time-series of objects, write the schema in the `object_time_series` entry:

```yaml
slowdash_project:
  data_source:
    type: InfluxDB
    parameters:
      url: influxdb://SlowDash@localhost:8086/TestData
      token: MY_TOKEN_HERE
      object_time_series:
        - measurement: TestTimeSeriesOfObjects
```
Redis: install the redis module with `pip3 install redis`.
```yaml
slowdash_project:
  data_source:
    type: RedisTS
    parameters:
      url: redis://localhost:6379
      time_series: { db: 1 }
```
or in a short form:
```yaml
slowdash_project:
  data_source:
    url: redis://localhost:6379/1
```
Keys for object time-series entries are `{Channel}_{Index}`, unless specified in the user configuration. `Index` is typically calculated by `int(TimeStamp % RetentionLength)`.

```yaml
slowdash_project:
  data_source:
    type: RedisTS
    parameters:
      url: redis://localhost:6379
      object_time_series: { db: 2 }
```
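The `{Channel}_{Index}` key construction sketched in Python (illustrative only; the actual key layout may be configured differently):

```python
def object_ts_key(channel, timestamp, retention_length):
    """Key under which one object time-series entry is stored.
    The index wraps within the retention window, ring-buffer style,
    so old entries are overwritten after retention_length seconds."""
    index = int(timestamp % retention_length)
    return '{}_{}'.format(channel, index)

print(object_ts_key('test_histogram_01', 1664916014, 60))
# -> test_histogram_01_14
```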
The `db` parameter of the `object_time_series` entry indicates the database number for the time-series of objects. All the Redis-TS and Redis-JSON entries in this database will be interpreted in a specific way. The Vaka utility in `SlowDash/vaka` can be used:

```python
import os, sys, time
import numpy as np

sys.path.insert(0, os.environ.get('VAKA_PKG', None))  # necessary if the VAKA package is not installed
import vaka as vk

redis = vk.DataStore_Redis(host='localhost', port=6379, db=1, retention_length=60)
histogram = vk.Histogram('test_histogram_01', 100, 0, 10)
graph = vk.Graph('test_graph_01', ['time', 'value'])

redis.flush_db()

tk, xk = 0, 0
while True:
    for i in range(10):
        histogram.fill(np.random.normal(5, 2))
    tk = tk + 1
    xk += np.random.normal(0.1, 1)
    graph.add_point(tk, xk)
    redis.write_object_timeseries(histogram)
    redis.write_object_timeseries(graph)
    time.sleep(1)
```
Entries in the database given in the `key_value` section will be automatically taken.
Example of a Python script to write a “current” histogram:
```python
import redis

r = redis.Redis('localhost', 6379, 1)

hist = {
    'bins': { 'min': 0, 'max': 100 },
    'counts': [ 3, 5, 8, 14, 11, 3, 6, 4, 4, 1 ]
}
r.json().set('hist00', '$', hist)
```
With the Vaka utility, the same can be done by replacing `write_object_timeseries()` with `write_object()`.

[TODO] implement schema-based binding
Create a CouchDB view with the key being the time-stamp, e.g.:
```javascript
function (doc) {
    const record = {
        "tag": doc.channel,
        "field01": doc.value01,
        "field02": doc.value02,
        ...
    };
    emit(doc.timestamp, record);
}
```
Here the record fields can be scalars, JSON, or Blob-ID. Tags are optional. Multiple views can be created. Currently only the UNIX timestamp is accepted for the key.
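To see what such a map function emits, here is a Python emulation over two hypothetical documents (the real function runs in JavaScript inside CouchDB):

```python
def map_doc(doc, emit):
    # Python rendering of the CouchDB map function shown above,
    # with one tag and one field for brevity
    record = {
        'tag': doc['channel'],
        'field01': doc['value01'],
    }
    emit(doc['timestamp'], record)

rows = []
docs = [
    {'timestamp': 1664916014, 'channel': 'ch_a', 'value01': 1.5},
    {'timestamp': 1664916015, 'channel': 'ch_b', 'value01': 2.5},
]
for doc in docs:
    map_doc(doc, lambda key, value: rows.append((key, value)))

# The view rows are keyed by timestamp, so a dashboard's time-range
# query becomes a key-range query over these rows.
print(rows[0])  # -> (1664916014, {'tag': 'ch_a', 'field01': 1.5})
```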
The SlowDash schema description for CouchDB is similar to that of SQL DB, except that a view is used instead of a table. As timestamps are used for the keys of the view, the time information is not necessary in the schema description.
```
VIEW_NAME [TAG] = FIELD1, FIELD2, ...
```

where `VIEW_NAME` is `DESIGN_DOCUMENT_NAME/INDEX_NAME`.
If a view contains only data fields, and all the data fields are taken into channels, the schema description can be just the view name.
Write the schema in the project configuration file:
```yaml
slowdash_project:
  data_source:
    url: couchdb://USER:PASS@localhost:5984/MyDB
    parameters:
      time_series:
        schema: MyDesignDoc/MyIndex
```
Channels are scanned from the data, but old channels that do not appear in the last segment of the data might not be found. In that case, explicitly list the channel names:
```yaml
time_series:
  schema: MyDesignDoc/MyIndex[Tag] = Field01, Field02, ...
  tags:
    list: [ 'Tag01', 'Tag02', ... ]
```
```yaml
time_series:
  schema: MyDesignDoc/MyIndex[Tag]
```

```yaml
time_series:
  schema: MyDesignDoc/MyIndex = Field01, Field02, ...
```

```yaml
time_series:
  schema: MyDesignDoc/MyIndex[Tag] = Field01(default), Field02, ...
```
Flags are currently not supported. Modify the CouchDB view definition so that a tag includes the flags.
Store JSON objects as data values:
```javascript
function (doc) {
    const record = {
        "spectrum": {
            "labels": [ "Frequency (Hz)", "FFT" ],
            "x": [ ... ],
            "y": [ ... ]
        },
        ...
    };
    emit(doc.timestamp, record);
}
```
Then specify the schema in `object_time_series`:
```yaml
slowdash_project:
  data_source:
    type: CouchDB
    parameters:
      url: couchdb://USER:PASS@localhost:5984/MyDB
      object_time_series:
        schema: MyDesignDoc/MyIndex
```
A view can also emit a reference to a document attachment (blob), for example an image:

```javascript
function (doc) {
    const record = {
        "photo": {
            mime: doc._attachments[doc.name].content_type,
            id: doc._id + '/' + doc.name,
            meta: doc.parameters
        },
        ...
    };
    emit(doc.timestamp, record);
}
```
A CouchDB view can be accessed as a table object with `view_table`:
```yaml
slowdash_project:
  data_source:
    type: CouchDB
    parameters:
      url: couchdb://USER:PASS@localhost:5984/MyDB
      view_table:
        name: DataTable
        schema: MyDesignDoc/MyIndex = Field01, Field02, ..
```
A row of a CouchDB view can be accessed as a tree object with `view_tree`:
```yaml
slowdash_project:
  data_source:
    type: CouchDB
    parameters:
      url: couchdb://USER:PASS@localhost:5984/MyDB
      view_tree:
        name: DataRecord
        schema: MyDesignDoc/MyIndex = Field01, Field02, ..
```
The validity of the contents can be specified with the `valid_from` attribute described above:

```yaml
slowdash_project:
  data_source:
    type: YAML
    parameters:
      name: RunConfig
      file: RunConfig.yaml
      valid_from: now
```
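The `valid_from` semantics noted earlier (picking up the "newest" entry valid at a given time) can be sketched in Python; `resolve_valid_from` and `pick_entry` are illustrative names for this document, not SlowDash API:

```python
import time

def resolve_valid_from(valid_from, creation_time):
    """Map a valid_from attribute to a numeric validity-start time.
    Sketch of the semantics described in this document."""
    if valid_from == 'now':
        return time.time()       # "current": always the newest
    if valid_from == 'creation':
        return creation_time     # valid since the file was created
    return float(valid_from)     # explicit timestamp; 0 means "neutral"

def pick_entry(entries, query_time):
    """Among (valid_from, payload) pairs, pick the newest entry
    that is already valid at query_time."""
    candidates = [e for e in entries if e[0] <= query_time]
    return max(candidates, key=lambda e: e[0])[1] if candidates else None

entries = [(0, 'defaults'), (1000, 'run config A'), (2000, 'run config B')]
print(pick_entry(entries, 1500))  # -> run config A
```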
[TODO]
[TODO]