...
...
Parameter name | Type | Optionality | Description | Constraints
---|---|---|---|---
server | string | | server URL; default = https://secure.clevermaps.io |
project | string | | id of the project to import; one of project or dump must be specified |
dump | string | | id of the dump to import; one of project or dump must be specified |
prefix | string | | specify a prefix for the metadata objects and data files |
execution | enum | | load and dump request execution type; default = async | [sync, async]
force | - | | force the import even if there are model violations in the source project; skips failed dataset dumps (for projects with incomplete data) |
dashboards | - | | import dashboards only |
datasets | - | | import datasets only |
indicators | - | | import indicators only |
indicatorDrills | - | | import indicator drills only |
markers | - | | import markers only |
markerSelectors | - | | import marker selectors only |
metrics | - | | import metrics only |
projectSettings | - | | import project settings only |
shares | - | | import shares only |
views | - | | import views only |
skipData | boolean | | skip data import |
Usage examples:
```
// import all objects referenced from catchment_area_view including datasets & data
importProject --project djrt22megphul1a5 --server --cascadeFrom catchment_area_view

// import all objects referenced from catchment_area_view except datasets & data
importProject --project djrt22megphul1a5 --server --cascadeFrom catchment_area_view --dashboards --exports --indicatorDrills --indicators --markerSelectors --markers --metrics --views

// import all objects referenced from catchment_area_dashboard
importProject --project djrt22megphul1a5 --server --cascadeFrom catchment_area_dashboard

// import all objects (datasets) referenced from baskets dataset - data model subset
importProject --project djrt22megphul1a5 --server --force --cascadeFrom baskets
```
importDatabase
Allows you to create datasets and import data from an external database.
This command reads the database metadata, creates datasets from it, then imports the data and saves it as CSV files. You can skip either step with the --skipMetadata and --skipData parameters. Please note that this command does not create any metadata objects other than datasets. It's also possible to import only specific tables using the --tables parameter.
The database must be located on a running database server accessible via a URL. This can be on localhost, or anywhere on the internet. Valid credentials for the database are of course necessary.
...
Parameter name | Type | Optionality | Description | Constraints
---|---|---|---|---
engine | enum | | name of the database engine | [postgresql]
host | string | | database server hostname; for local databases, use localhost |
port | integer | | database server port |
schema | string | | name of the database schema; leave out if your engine does not support schemas |
database | string | | name of the database |
user | string | | user name for login to the database |
password | string | | user's password |
tables | array | | list of tables to import; leave out if you want to import all tables from the database; example = orders,clients,stores |
skipData | boolean | | skip data import |
skipMetadata | boolean | | skip metadata import |
Usage examples
...
```
importDatabase --engine postgresql --host localhost --port 5432 --database my_db --user postgres --password test

importDatabase --engine postgresql --host 172.16.254.1 --port 6543 --schema my_schema --database my_db --user postgres --password test --tables orders,clients,stores
```
loadCsv
Load data from a CSV file into a specified dataset.
loadCsv also offers various CSV input settings. Your CSV file may contain specific features, like custom quote or separator characters. The parameters with the csv prefix allow you to configure the data load to fit these features, instead of transforming the CSV file to one specific format. Special cases include the csvNull and csvForceNull parameters.
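To illustrate what these settings control, here is a small sketch (not part of the tool itself) using Python's csv module: the delimiter, quotechar, and escapechar dialect options play the same roles as the csvSeparator, csvQuote, and csvEscape parameters, and csvHeader corresponds to treating the first row as a header. The sample data is invented for the illustration.

```python
import csv
import io

# A CSV file using ';' as separator, with a quoted field that
# contains the separator character.
raw = 'id;name\n1;"Smith; John"\n2;Ada\n'

reader = csv.reader(
    io.StringIO(raw),
    delimiter=';',    # corresponds to csvSeparator
    quotechar='"',    # corresponds to csvQuote
    escapechar='\\',  # corresponds to csvEscape
)
rows = list(reader)

# csvHeader=true: treat the first row as a header
header, data = rows[0], rows[1:]
print(header)  # ['id', 'name']
print(data)    # [['1', 'Smith; John'], ['2', 'Ada']]
```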
...
Parameter name | Type | Optionality | Description | Constraints
---|---|---|---|---
file | string | | path to the CSV file; one of file, s3Uri or url parameters must be specified |
s3Uri | string | | URI of an object on AWS S3 to upload (see examples below); one of file, s3Uri or url parameters must be specified |
url | string | | HTTPS URL which contains a CSV file to be loaded into the dataset; one of file, s3Uri or url parameters must be specified |
dataset | string | REQUIRED | dataset into which the data will be loaded |
mode | enum | | incremental mode appends the data to the end of the table; full mode truncates the table and loads the table anew | [incremental, full]
execution | enum | | load request execution type; default = async | [sync, async]
csvHeader | boolean | | specifies if the CSV file to upload has a header; default = true | [true, false]
csvSeparator | char | | specifies the CSV column separator character; default = , |
csvQuote | char | | specifies the CSV quote character; default = " |
csvEscape | char | | specifies the CSV escape character; default = \ |
csvNull | | | |
csvForceNull | | | |
verbose | boolean | | enables more verbose output; default = false | [true, false]
multipart | boolean | | enables multipart file upload (recommended for files larger than 2 GB); default = true | [true, false]
gzip | boolean | | enables gzip compression; default = true | [true, false]
Usage examples:
Please note that your AWS S3 Access Key ID and Secret Access Key must be set using the setup command first.
```
loadCsv --dataset orders --mode full --s3Uri s3://my-company/data/orders.csv --verbose

loadCsv --dataset orders --mode full --url http://www.example.com/download/orders.csv --verbose
```
dumpCsv
Dump data from a specified dataset into a CSV file.
This command also offers the synchronous/asynchronous execution type, analogously to the loadCsv command, as described above.
Parameter name | Type | Optionality | Description | Constraints
---|---|---|---|---
file | string | | path to the CSV file | stored in config file
dataset | string | | dataset whose data will be dumped |
execution | enum | | dump request execution type; default = async | [sync, async]
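A hypothetical usage sketch built from the parameters above (the dataset name and file path are illustrative):

```
dumpCsv --dataset orders --file orders.csv --execution sync
```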
dumpProject
Dump project data & metadata to a directory. If the dump is successful, it is opened as the current dump.
Parameter name | Type | Optionality | Description | Constraints
---|---|---|---|---
directory | string | | directory to which the dump will be saved | stored in config file
skipMetadata | - | | skip metadata dump |
skipData | - | | skip data dump |
execution | enum | OPTIONAL | dump request execution type; default = async | [sync, async]
force | - | | skip failed dataset dumps (for projects with incomplete data) |
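A hypothetical usage sketch built from the parameters above (the directory path is illustrative):

```
// dump metadata only, skipping failed dataset dumps
dumpProject --directory /tmp/my_project_dump --skipData --force
```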
...
Parameter name | Type | Optionality | Description | Constraints
---|---|---|---|---
project | string | | validate any other project than the currently opened one |
skipModel | string | | skip validations of the data model |
skipData | string | | skip validations of the data itself |
execution | enum | | validation request execution type; default = async | [sync, async]
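A hypothetical usage sketch built from the parameters above (the project id is illustrative):

```
// validate the data model of another project, skipping data validations
validate --project djrt22megphul1a5 --skipData --execution sync
```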
Opened dump
addMetadata
Add a metadata object to the project. The file must be located in a currently opened dump, and in the correct directory.
...
Parameter name | Type | Optionality | Description | Constraints
---|---|---|---|---
type | enum | | type of the object |
objectName | string | | name of the object (with or without .json extension) |
subtype | enum | | subtype of the dataset; required only for datasets | [basic, geometryPoint, geometryPolygon]
file | string | | path to the CSV file (located either in dump, or anywhere in the file system); required only for datasets |
primaryKey | string | | name of the CSV column that will be marked as primary key; if not specified, the first CSV column is selected |
geometry | string | | name of the geometry column; required only for the geometryPoint and geometryPolygon subtypes |
csvSeparator | char | | specifies custom CSV column separator character; default = , |
csvQuote | char | | specifies custom CSV column quote character; default = " |
csvEscape | char | | specifies custom CSV column escape character; default = \ |
Usage examples:
```
createMetadata --type dataset --subtype basic --objectName "baskets" --file "baskets.csv" --primaryKey "basket_id"

createMetadata --type dataset --subtype geometryPoint --objectName "shops" --file "shops.csv" --primaryKey "shop_id"

createMetadata --type dataset --subtype geometryPolygon --objectName "district" --geometry "districtgeojson" --file "district.csv" --primaryKey "district_code"
```
...
Remove a metadata object from the project and from the dump. The file must be located in a currently opened dump, and must not be new.
...
Parameter name | Type | Optionality | Description | Constraints
---|---|---|---|---
objectName | string | | name of the object (with or without .json extension); one of objectName or objectId must be specified |
objectId | string | | id of the object; one of objectName or objectId must be specified |
orphanObjects | boolean | | prints a list of orphan objects; an orphan object is an object not referenced from any of the project's views, or visible anywhere else in the app |
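A hypothetical usage sketch built from the parameters above (the object name is illustrative):

```
removeMetadata --objectName baskets
```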
renameMetadata
Rename an object in a local dump and on the server. If the object is referenced in other objects by URI (/rest/projects/$projectId/md/{objectType}?name=), the references will be renamed as well.
...
This command basically wraps the functionality of loadCsv. It collects all modified metadata to upload, and performs an asynchronous full load of CSV files.
Parameter name | Type | Optionality | Description | Constraints
---|---|---|---|---
skipMetadata | - | | skip metadata push |
skipData | - | | skip data push |
skipValidate | - | | skip the run of validate after push |
execution | enum | | load request execution type; default = async | [sync, async]
force | boolean | | force metadata push when there's a share breaking change |
verbose | boolean | | enables more verbose output; default = false | [true, false]
multipart | boolean | | enables multipart file upload (recommended for files larger than 2 GB); default = true | [true, false]
gzip | boolean | | enables gzip compression; default = true | [true, false]
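A hypothetical usage sketch built from the parameters above:

```
// push metadata changes only, skipping the automatic validate run
pushProject --skipData --skipValidate --verbose
```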
...