# API client for GroundWork Renewables
## Installation

JavaScript:

```sh
$ npm install @grndwork/api-client
```

Python:

```sh
$ pip install grndwork-api-client
```
## Getting Started

JavaScript:

```js
import {createClient} from '@grndwork/api-client';

const client = createClient();
```

Python:

```python
from grndwork_api_client import create_client

client = create_client()
```
## Authentication

To access https://api.grndwork.com you must first obtain a refresh token file from GroundWork Renewables. The path to this file can be provided to the client using the `GROUNDWORK_TOKEN_PATH` environment variable. Alternatively, the subject and token values from this file can be provided using the `GROUNDWORK_SUBJECT` and `GROUNDWORK_TOKEN` environment variables. When providing subject and token values, `GROUNDWORK_TOKEN_PATH` must not be set.
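For example, the environment variables can be set from Python before the client is created (the token path below is a hypothetical placeholder):

```python
import os

# Option 1: point the client at the refresh token file
# (hypothetical placeholder path)
os.environ['GROUNDWORK_TOKEN_PATH'] = '/path/to/token-file'

# Option 2: provide the subject and token values directly instead;
# GROUNDWORK_TOKEN_PATH must not be set when using these.
# os.environ['GROUNDWORK_SUBJECT'] = '...'
# os.environ['GROUNDWORK_TOKEN'] = '...'
```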
For methods that return lists, the JavaScript client returns a custom async iterable. You can consume this using:

```js
for await (const station of client.getStations()) {
  ...
}
```

or

```js
const stations = await client.getStations().toArray();
```
For methods that return lists, the Python client returns a standard iterator. You can consume this using:

```python
for station in client.get_stations():
    ...
```

or

```python
stations = list(client.get_stations())
```
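Because the iterator is lazy, you can also stop early without fetching every page from the API, for example with `itertools.islice`. This sketch uses a plain generator standing in for `client.get_stations()`:

```python
from itertools import islice

# A generator standing in for client.get_stations(); islice stops pulling
# items (and therefore pages) once it has the requested count.
def fake_stations():
    for i in range(1_000_000):
        yield {'station_full_name': f'Station{i}'}

first_three = list(islice(fake_stations(), 3))
```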
## Creating a Client

JavaScript:

```ts
createClient(
  platform?: string | null,
  options?: ClientOptions | null,
): Client
```

Python:

```python
create_client(
    platform: str | None,
    options: ClientOptions | None,
) -> Client
```
Takes an optional platform string and options object and returns an API client instance.
Param | Type | Description |
---|---|---|
request_timeout | number | Seconds to wait for each response from the server ( default: 30.0 ) |
request_retries | number | Number of times to retry a failed request ( default: 3 ) |
request_backoff | number | Seconds to wait between retries ( default: 30.0 ) |
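These options interact roughly as sketched below; this is only an illustration of the retry semantics, not the client's actual implementation (`send` is a hypothetical request function):

```python
import time

def request_with_retries(send, request_retries=3, request_backoff=30.0):
    # One initial attempt plus up to request_retries retries,
    # sleeping request_backoff seconds between attempts.
    for attempt in range(request_retries + 1):
        try:
            return send()
        except Exception:
            if attempt == request_retries:
                raise
            time.sleep(request_backoff)
```

With the defaults, a persistently failing request is attempted 4 times in total before the last error is raised.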
## Get Stations

Provides the ability to get stations.
JavaScript:

```ts
client.getStations(
  query?: GetStationsQuery | null,
  options?: {
    page_size?: number | null,
  },
): IterableResponse<StationWithDataFiles>
```

Python:

```python
client.get_stations(
    query: GetStationsQuery | None,
    *,
    page_size: int | None,
) -> Iterator[StationWithDataFiles]
```
Takes an optional get stations query object as an argument and returns a list of stations.
Param | Type | Description |
---|---|---|
station | string | Only return stations with UUID, name, or name matching pattern |
site | string | Only return stations for site with UUID, name, or name matching pattern |
client | string | Only return stations for client with UUID, name, or name matching pattern |
limit | number | Maximum number of stations to return |
offset | number | Number of stations to skip over before returning results |
Parameters that support patterns can use a wildcard `*` at the beginning and/or end of the string. Pattern matching is case-insensitive.
For example:
JavaScript:

```js
const stations = await client.getStations(
  {station: 'Test*'},
).toArray();
```

Python:

```python
stations = list(client.get_stations(
    {'station': 'Test*'},
))
```
This would return all stations whose name starts with `Test`.
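The wildcard semantics can be illustrated locally with Python's `fnmatch` module (this helper is only an illustration and is not part of the client):

```python
import fnmatch

def matches(pattern: str, name: str) -> bool:
    # '*' may appear at the beginning and/or end of the pattern;
    # lower-casing both sides makes the match case-insensitive.
    return fnmatch.fnmatch(name.lower(), pattern.lower())
```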
You can set an optional page size to control the number of stations returned per request from the API. ( min: 1, max: 100, default: 100 )
JavaScript:

```js
const stations = await client.getStations(
  null,
  {page_size: 50},
).toArray();
```

Python:

```python
stations = list(client.get_stations(
    None,
    page_size=50,
))
```
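Page size maps onto limit/offset paging against the API. A minimal sketch of the idea, where `fetch_page` is a hypothetical helper rather than the client's real internals:

```python
def iter_pages(fetch_page, page_size=100):
    # Request successive pages until a short page signals the end.
    offset = 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        yield from page
        if len(page) < page_size:
            return
        offset += page_size
```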
Stations are returned in alphabetical order by station name.
Example response:

```json
[
  {
    "client_uuid": "286dfd7a-9bfa-41f4-a5d0-87cb62fac452",
    "client_full_name": "TestClient",
    "client_short_name": "TEST",
    "site_uuid": "007bb682-476e-4844-b67c-82ece91a9b09",
    "site_full_name": "TestSite",
    "station_uuid": "9a8ebbee-ddd1-4071-b17f-356f42867b5e",
    "station_full_name": "TestStation",
    "description": "",
    "model": "",
    "type": "",
    "status": "",
    "project_manager": {
      "full_name": "",
      "email": ""
    },
    "maintenance_frequency": "",
    "maintenance_log": "",
    "location_region": "",
    "latitude": 0,
    "longitude": 0,
    "altitude": 0,
    "timezone_offset": -5,
    "start_timestamp": "2020-01-01 00:00:00",
    "end_timestamp": "2020-12-31 23:59:59",
    "created_at": "2020-01-12T15:31:35.338369Z",
    "updated_at": "2021-01-08T22:10:36.904328Z",
    "data_files": [
      {
        "filename": "Test_OneMin.dat",
        "is_stale": false,
        "headers": {
          "columns": ["Ambient_Temp"],
          "units": ["Deg_C"],
          "processing": ["Avg"]
        },
        "created_at": "2020-01-12T15:31:35.338369Z",
        "updated_at": "2021-01-08T22:10:36.904328Z"
      }
    ],
    "data_file_prefix": "Test_"
  }
]
```
## Get Reports

Provides the ability to get reports.
JavaScript:

```ts
client.getReports(
  query?: GetReportsQuery | null,
  options?: {
    page_size?: number | null,
  },
): IterableResponse<Report>
```

Python:

```python
client.get_reports(
    query: GetReportsQuery | None,
    *,
    page_size: int | None,
) -> Iterator[Report]
```
Takes an optional get reports query object as an argument and returns a list of reports.
Param | Type | Description |
---|---|---|
station | string | Only return reports for station with UUID, name, or name matching pattern |
site | string | Only return reports for site with UUID, name, or name matching pattern |
client | string | Only return reports for client with UUID, name, or name matching pattern |
before | date | Only return reports that end on or before date ( format: YYYY-MM-DD ) |
after | date | Only return reports that start on or after date ( format: YYYY-MM-DD ) |
limit | number | Maximum number of reports to return |
offset | number | Number of reports to skip over before returning results |
Parameters that support patterns can use a wildcard `*` at the beginning and/or end of the string. Pattern matching is case-insensitive.
For example:
JavaScript:

```js
const reports = await client.getReports(
  {station: 'Test*'},
).toArray();
```

Python:

```python
reports = list(client.get_reports(
    {'station': 'Test*'},
))
```
This would return all reports for stations whose name starts with `Test`.
You can set an optional page size to control the number of reports returned per request from the API. ( min: 1, max: 100, default: 100 )
JavaScript:

```js
const reports = await client.getReports(
  null,
  {page_size: 50},
).toArray();
```

Python:

```python
reports = list(client.get_reports(
    None,
    page_size=50,
))
```
Reports are returned in reverse chronological order.
Example response:

```json
[
  {
    "key": "GR_SRMReport_TEST_TestStation_3a810e02-3d29-4730-9325-41246046e3ac.pdf",
    "package_name": "GR_SRMReport_TEST_TestStation_2020-12-31",
    "kind": "legacy-solar-resource-measurement-station-report",
    "station_uuid": "9a8ebbee-ddd1-4071-b17f-356f42867b5e",
    "start_date": "2020-01-01",
    "end_date": "2020-12-31",
    "status": "COMPLETE",
    "has_pdf": true,
    "published_at": "2021-02-14T23:46:11.827148Z",
    "created_at": "2020-01-12T15:31:35.338369Z",
    "updated_at": "2021-01-08T22:10:36.904328Z",
    "data_exports": [
      {
        "key": "TEST_TestStation_OneMin_4fcb5527-2b84-4b49-9623-9ffb0a0f8517.dat",
        "filename": "Test_OneMin.dat",
        "format": "",
        "format_options": {},
        "headers": {
          "columns": ["Ambient_Temp"],
          "units": ["Deg_C"],
          "processing": ["Avg"]
        },
        "start_timestamp": "2020-01-01 00:00:00",
        "end_timestamp": "2020-12-31 23:59:59",
        "record_count": 525600,
        "status": "COMPLETE",
        "created_at": "2020-01-12T15:31:35.338369Z",
        "updated_at": "2021-01-08T22:10:36.904328Z"
      }
    ],
    "files": [
      {
        "key": "GR_HourlyDataAggregation_TEST_TestStation_078d4a86-8145-4ef4-82c4-cdf81834306f.csv",
        "filename": "GR_HourlyDataAggregation_TEST_TestStation.csv",
        "description": "",
        "type": "",
        "created_at": "2020-01-12T15:31:35.338369Z",
        "updated_at": "2021-01-08T22:10:36.904328Z"
      }
    ]
  }
]
```
## Download Report

Provides the ability to download a report package to a local folder.
JavaScript:

```ts
client.downloadReport(
  report: Report,
  options?: {
    destination_folder?: string | null,
    max_concurrency?: number | null,
  },
): Promise<Array<string>>
```

Python:

```python
client.download_report(
    report: Report,
    *,
    destination_folder: str | None,
    max_concurrency: int | None,
) -> List[str]
```
Takes a report object as an argument, downloads all files for the report, and returns a list of the downloaded files.
You can set a destination folder for the report files, otherwise the current working directory is used.
JavaScript:

```js
const downloadedFiles = await client.downloadReport(
  report,
  {destination_folder: '/tmp'},
);
```

Python:

```python
downloaded_files = client.download_report(
    report,
    destination_folder='/tmp',
)
```
You can set a max concurrency to control how many files are downloaded in parallel. ( min: 1, default: 10 )
JavaScript:

```js
const downloadedFiles = await client.downloadReport(
  report,
  {max_concurrency: 1},
);
```

Python:

```python
downloaded_files = client.download_report(
    report,
    max_concurrency=1,
)
```
The Python client uses threads for concurrency; to disable the use of threads, set the value to 1.
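The effect of max concurrency can be sketched with a thread pool; `download_one` is a hypothetical stand-in for the client's internal per-file download, not the client's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(files, download_one, max_concurrency=10):
    files = list(files)
    if max_concurrency == 1:
        # No threads: download sequentially in the calling thread.
        return [download_one(f) for f in files]
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        # pool.map preserves input order in its results.
        return list(pool.map(download_one, files))
```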
## Get Data Files

Provides the ability to get data files.
JavaScript:

```ts
client.getDataFiles(
  query?: GetDataFilesQuery | null,
  options?: {
    page_size?: number | null,
  },
): IterableResponse<DataFile>
```

Python:

```python
client.get_data_files(
    query: GetDataFilesQuery | None,
    *,
    page_size: int | None,
) -> Iterator[DataFile]
```
Takes an optional get data files query object as an argument and returns a list of data files.
Param | Type | Description |
---|---|---|
filename | string | Only return data files with name or name matching pattern |
station | string | Only return data files for station with UUID, name, or name matching pattern |
site | string | Only return data files for site with UUID, name, or name matching pattern |
client | string | Only return data files for client with UUID, name, or name matching pattern |
limit | number | Maximum number of files to return |
offset | number | Number of files to skip over before returning results |
Parameters that support patterns can use a wildcard `*` at the beginning and/or end of the string. Pattern matching is case-insensitive.
For example:
JavaScript:

```js
const dataFiles = await client.getDataFiles(
  {filename: '*_OneMin.dat'},
).toArray();
```

Python:

```python
data_files = list(client.get_data_files(
    {'filename': '*_OneMin.dat'},
))
```
This would return all one-minute data files.
You can set an optional page size to control the number of files returned per request from the API. ( min: 1, max: 100, default: 100 )
JavaScript:

```js
const dataFiles = await client.getDataFiles(
  null,
  {page_size: 50},
).toArray();
```

Python:

```python
data_files = list(client.get_data_files(
    None,
    page_size=50,
))
```
Data files are returned in alphabetical order by filename.
Example response:

```json
[
  {
    "source": "station:9a8ebbee-ddd1-4071-b17f-356f42867b5e",
    "source_start_timestamp": "2020-01-01 00:00:00",
    "source_end_timestamp": "2020-12-31 23:59:59",
    "filename": "Test_OneMin.dat",
    "is_stale": false,
    "headers": {
      "columns": ["Ambient_Temp"],
      "units": ["Deg_C"],
      "processing": ["Avg"]
    },
    "created_at": "2020-01-12T15:31:35.338369Z",
    "updated_at": "2021-01-08T22:10:36.904328Z"
  }
]
```
## Get Data Records

Provides the ability to get data records for a given data file.
JavaScript:

```ts
client.getDataRecords(
  query: GetDataRecordsQuery,
  options?: {
    include_qc_flags?: boolean | null,
    page_size?: number | null,
  },
): IterableResponse<DataRecord>
```

Python:

```python
client.get_data_records(
    query: GetDataRecordsQuery,
    *,
    include_qc_flags: bool | None,
    page_size: int | None,
) -> Iterator[DataRecord]
```
Takes a required get data records query object as an argument and returns a list of data records.
Param | Type | Description |
---|---|---|
filename | string | Data file name to return records for ( required ) |
limit | number | Maximum number of records to return ( default: 1 when before and after are not set ) |
before | timestamp | Only return records at or before timestamp ( format: YYYY-MM-DD hh:mm:ss ) |
after | timestamp | Only return records at or after timestamp ( format: YYYY-MM-DD hh:mm:ss ) |
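For example, a query selecting a single day of records could be built like this (filename and timestamps are illustrative; note that the fixed-width timestamp format compares correctly even as plain strings):

```python
# Hypothetical query for one day of one-minute records
query = {
    'filename': 'Test_OneMin.dat',
    'after': '2020-01-01 00:00:00',
    'before': '2020-01-01 23:59:59',
}
# records = list(client.get_data_records(query))
```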
By default each record will include the qc flags that apply to that data record. This behavior can be disabled.
For example:
JavaScript:

```js
const dataRecords = await client.getDataRecords(
  {filename: 'Test_OneMin.dat'},
  {include_qc_flags: false},
).toArray();
```

Python:

```python
data_records = list(client.get_data_records(
    {'filename': 'Test_OneMin.dat'},
    include_qc_flags=False,
))
```
This would return records without qc flags.
You can set an optional page size to control the number of records returned per request from the API. ( min: 1, max: 1500, default: 1500 )
JavaScript:

```js
const dataRecords = await client.getDataRecords(
  {filename: 'Test_OneMin.dat'},
  {page_size: 60},
).toArray();
```

Python:

```python
data_records = list(client.get_data_records(
    {'filename': 'Test_OneMin.dat'},
    page_size=60,
))
```
Data records are returned in reverse chronological order starting at the most recent timestamp.
Example response:

```json
[
  {
    "timestamp": "2020-01-01 00:00:00",
    "record_num": 1000,
    "data": {
      "Ambient_Temp": 50
    },
    "qc_flags": {
      "Ambient_Temp": 1
    }
  }
]
```
## Get Data QC

Provides the ability to get only the qc flags for a given data file.
JavaScript:

```ts
client.getDataQC(
  query: GetDataQCQuery,
  options?: {
    page_size?: number | null,
  },
): IterableResponse<QCRecord>
```

Python:

```python
client.get_data_qc(
    query: GetDataQCQuery,
    *,
    page_size: int | None,
) -> Iterator[QCRecord]
```
Takes a required get data qc query object as an argument and returns a list of qc records.
Param | Type | Description |
---|---|---|
filename | string | Data file name to return records for ( required ) |
limit | number | Maximum number of records to return ( default: 1 when before and after are not set ) |
before | timestamp | Only return records at or before timestamp ( format: YYYY-MM-DD hh:mm:ss ) |
after | timestamp | Only return records at or after timestamp ( format: YYYY-MM-DD hh:mm:ss ) |
You can set an optional page size to control the number of records returned per request from the API. ( min: 1, max: 1500, default: 1500 )
JavaScript:

```js
const qcRecords = await client.getDataQC(
  {filename: 'Test_OneMin.dat'},
  {page_size: 60},
).toArray();
```

Python:

```python
qc_records = list(client.get_data_qc(
    {'filename': 'Test_OneMin.dat'},
    page_size=60,
))
```
QC records are returned in reverse chronological order starting at the most recent timestamp.
Example response:

```json
[
  {
    "timestamp": "2020-01-01 00:00:00",
    "qc_flags": {
      "Ambient_Temp": 1
    }
  }
]
```
## Get Data

Provides the ability to get both data files and records for those files via a nested iterator.
JavaScript:

```ts
client.getData(
  query?: GetDataFilesQuery | GetDataQuery | null,
  options?: {
    include_data_records?: boolean | null,
    include_qc_flags?: boolean | null,
    file_page_size?: number | null,
    record_page_size?: number | null,
  },
): IterableResponse<DataFile> | IterableResponse<DataFileWithRecords>
```

Python:

```python
client.get_data(
    query: GetDataFilesQuery | GetDataQuery | None,
    *,
    include_data_records: bool | None,
    include_qc_flags: bool | None,
    file_page_size: int | None,
    record_page_size: int | None,
) -> Iterator[DataFile] | Iterator[DataFileWithRecords]
```
Takes an optional get data query object as an argument and returns a list of data files.
Param | Type | Description |
---|---|---|
filename | string | Only return data files with name or name matching pattern |
station | string | Only return data files for station with UUID, name, or name matching pattern |
site | string | Only return data files for site with UUID, name, or name matching pattern |
client | string | Only return data files for client with UUID, name, or name matching pattern |
limit | number | Maximum number of files to return |
offset | number | Number of files to skip over before returning results |
records_limit | number | Maximum number of records to return per file ( default: 1 when before and after are not set ) |
records_before | timestamp | Only return records at or before timestamp ( format: YYYY-MM-DD hh:mm:ss ) |
records_after | timestamp | Only return records at or after timestamp ( format: YYYY-MM-DD hh:mm:ss ) |
Parameters that support patterns can use a wildcard `*` at the beginning and/or end of the string. Pattern matching is case-insensitive.
For example:
JavaScript:

```js
const dataFiles = await client.getData(
  {filename: '*_OneMin.dat'},
).toArray();
```

Python:

```python
data_files = list(client.get_data(
    {'filename': '*_OneMin.dat'},
))
```
This would return all one-minute data files.
When the include_data_records option is set to true, data records are returned for each data file in reverse chronological order starting at the most recent timestamp.
For example:
JavaScript:

```js
for await (const dataFile of client.getData(
  null,
  {include_data_records: true},
)) {
  const dataRecords = await dataFile.records.toArray();
}
```

Python:

```python
for data_file in client.get_data(
    None,
    include_data_records=True,
):
    data_records = list(data_file['records'])
```
This would return data files with their records.
By default each record will include the qc flags that apply to that data record. This behavior can be disabled.
For example:
JavaScript:

```js
for await (const dataFile of client.getData(
  null,
  {
    include_data_records: true,
    include_qc_flags: false,
  },
)) {
  const dataRecords = await dataFile.records.toArray();
}
```

Python:

```python
for data_file in client.get_data(
    None,
    include_data_records=True,
    include_qc_flags=False,
):
    data_records = list(data_file['records'])
```
This would return records without qc flags.
You can set an optional page size to control the number of files returned per request from the API. ( min: 1, max: 100, default: 100 )
JavaScript:

```js
const dataFiles = await client.getData(
  null,
  {file_page_size: 50},
).toArray();
```

Python:

```python
data_files = list(client.get_data(
    None,
    file_page_size=50,
))
```
You can set an optional page size to control the number of records returned per request from the API. ( min: 1, max: 1500, default: 1500 )
JavaScript:

```js
const dataFiles = await client.getData(
  null,
  {record_page_size: 60},
).toArray();
```

Python:

```python
data_files = list(client.get_data(
    None,
    record_page_size=60,
))
```
Data files are returned in alphabetical order by filename.
Example response:

```json
[
  {
    "source": "station:9a8ebbee-ddd1-4071-b17f-356f42867b5e",
    "source_start_timestamp": "2020-01-01 00:00:00",
    "source_end_timestamp": "2020-12-31 23:59:59",
    "filename": "Test_OneMin.dat",
    "is_stale": false,
    "headers": {
      "columns": ["Ambient_Temp"],
      "units": ["Deg_C"],
      "processing": ["Avg"]
    },
    "created_at": "2020-01-12T15:31:35.338369Z",
    "updated_at": "2021-01-08T22:10:36.904328Z",
    "records": [
      {
        "timestamp": "2020-01-01 00:00:00",
        "record_num": 1000,
        "data": {
          "Ambient_Temp": 50
        },
        "qc_flags": {
          "Ambient_Temp": 1
        }
      }
    ]
  }
]
```
## Post Data

Provides the ability to create data files and upload records to those files.
JavaScript:

```ts
client.postData(
  payload: PostDataPayload,
): Promise<void>
```

Python:

```python
client.post_data(
    payload: PostDataPayload,
) -> None
```
Takes a post data payload object as an argument and uploads it to the cloud.
Param | Type | Description |
---|---|---|
source | string | The station that collected the data |
files | Array | Array of data files ( min length: 1, max length: 20 ) |
files[].filename | string | Filename using the format `<OneMin |
files[].headers | DataFileHeaders | Optional headers for the file |
files[].headers.meta | Record<string, string> | User defined meta data for the file |
files[].headers.columns | Array | Array of column names matching the data keys |
files[].headers.units | Array | Array of units for the columns |
files[].headers.processing | Array | Array of processing used for column data (Min, Max, Avg) |
files[].records | Array | Array of data records for file ( max length: 100 combined across all files ) |
files[].records[].timestamp | timestamp | The timestamp of the data record in UTC ( format: YYYY-MM-DD hh:mm:ss ) |
files[].records[].record_num | number | Positive sequential number for records in file |
files[].records[].data | Record<string, any> | Data for record, keys should match header.columns |
overwrite | boolean | Whether to overwrite existing data records when timestamps match |
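Putting the table together, a payload might look like the following (values are illustrative and reuse the sample identifiers from earlier in this document):

```python
# Hypothetical payload built from the fields described above
payload = {
    'source': 'station:9a8ebbee-ddd1-4071-b17f-356f42867b5e',
    'files': [
        {
            'filename': 'Test_OneMin.dat',
            'headers': {
                'columns': ['Ambient_Temp'],
                'units': ['Deg_C'],
                'processing': ['Avg'],
            },
            'records': [
                {
                    'timestamp': '2020-01-01 00:00:00',
                    'record_num': 1,
                    'data': {'Ambient_Temp': 50},
                },
            ],
        },
    ],
    'overwrite': False,
}
# client.post_data(payload)
```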