...


...

Parameter name | Type | Optionality | Description | Constraints
server | string | OPTIONAL | server URL | default = https://secure.clevermaps.io
project | string | VARIES | id of the project to import | either project or dump must be specified
dump | string | VARIES | id of the dump to import | either project or dump must be specified
prefix | string | OPTIONAL | specify a prefix for the metadata objects and data files |
execution | enum | OPTIONAL | load and dump request execution type; default = async | [sync, async]
force | - | OPTIONAL | ignore source project validate errors and proceed with the import anyway |
dashboards | - | OPTIONAL | import dashboards only |
datasets | - | OPTIONAL | import datasets only |
indicators | - | OPTIONAL | import indicators only |
indicatorDrills | - | OPTIONAL | import indicator drills only |
markers | - | OPTIONAL | import markers only |
markerSelectors | - | OPTIONAL | import marker selectors only |
metrics | - | OPTIONAL | import metrics only |
projectSettings | - | OPTIONAL | import project settings only |
shares | - | OPTIONAL | import shares only |
views | - | OPTIONAL | import views only |
skipData | boolean | OPTIONAL | skip data import |


Usage examples:
Code Block: Cascade import examples
// import all objects referenced from catchment_area_view including datasets & data
importProject --project djrt22megphul1a5 --server --cascadeFrom catchment_area_view

// import all objects referenced from catchment_area_view, excluding datasets & data
importProject --project djrt22megphul1a5 --server --cascadeFrom catchment_area_view --dashboards --exports --indicatorDrills --indicators --markerSelectors --markers --metrics --views

// import all objects referenced from catchment_area_dashboard
importProject --project djrt22megphul1a5 --server --cascadeFrom catchment_area_dashboard

// import all objects (datasets) referenced from baskets dataset - data model subset
importProject --project djrt22megphul1a5 --server --force --cascadeFrom baskets

importDatabase

Allows you to create datasets and import data from an external database.

This command reads the database metadata, creates datasets from it, then imports the data and saves it as CSV files. You can skip either step with the --skipMetadata and --skipData parameters. Please note that this command does not create any metadata objects other than datasets. It is also possible to import only specific tables using the --tables parameter.

The database must reside on a running database server that is reachable via a URL, either on localhost or anywhere on the internet. Valid database credentials are, of course, required.
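As a sketch, a metadata-only run (create datasets from the database schema without importing any data) might look like the following; the connection values are placeholders, and only parameters documented below are used:

```shell
importDatabase --engine postgresql --host localhost --port 5432 --database my_db --user postgres --password secret --skipData
```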

...

Parameter name | Type | Optionality | Description | Constraints
engine | enum | REQUIRED | name of the database engine | [postgresql]
host | string | REQUIRED | database server hostname | for local databases, use localhost
port | integer | REQUIRED | database server port |
schema | string | OPTIONAL | name of the database schema | leave out if your engine does not support schemas, or the schema is public
database | string | REQUIRED | name of the database |
user | string | REQUIRED | user name for login to the database |
password | string | REQUIRED | user's password |
tables | array | OPTIONAL | list of tables to import | leave out to import all tables from the database; example = "orders,clients,stores"
skipData | boolean | OPTIONAL | skip data import |
skipMetadata | boolean | OPTIONAL | skip metadata import |
Usage examples:
Code Block
importDatabase --engine postgresql --host 172.16.254.1 --port 6543 --schema my_schema --database my_db --user postgres --password test --tables orders,clients,stores

loadCsv

Load data from a CSV file into a specified dataset.

loadCsv also offers various CSV input settings. Your CSV file may use specific features, such as custom quote or separator characters. The parameters with the csv prefix let you configure the data load to match these features, instead of transforming the CSV file into one specific format. Special cases include the csvNull and csvForceNull parameters.
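For instance, a semicolon-separated file that encodes missing values as "N/A" could be loaded without any pre-processing; the dataset name and file path here are placeholders, and only parameters documented below are used:

```shell
loadCsv --file data/orders.csv --dataset orders --mode full --csvSeparator ";" --csvNull "N/A"
```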

...

Parameter name | Type | Optionality | Description | Constraints
file | string | VARIES | path to the CSV file | one of file, s3Uri or url parameters must be specified
s3Uri | string | VARIES | URI of an object on AWS S3 to upload (see examples below) | one of file, s3Uri or url parameters must be specified
url | string | VARIES | HTTPS URL which contains a CSV file to be loaded into the dataset | one of file, s3Uri or url parameters must be specified
dataset | string | REQUIRED | dataset into which the data should be loaded |
mode | enum | REQUIRED | incremental mode appends the data to the end of the table; full mode truncates the table and loads it anew | [incremental, full]
execution | enum | OPTIONAL | load request execution type; default = async | [sync, async]
csvHeader | boolean | OPTIONAL | specifies if the CSV file to upload has a header; default = true | [true, false]
csvSeparator | char | OPTIONAL | specifies the CSV column separator character | default = ,
csvQuote | char | OPTIONAL | specifies the CSV quote character | default = "
csvEscape | char | OPTIONAL | specifies the CSV escape character | default = \
csvNull | string | OPTIONAL | specifies the replacement of custom CSV null values |
csvForceNull | enum | OPTIONAL | specifies which CSV columns should enforce the null replacement |
verbose | boolean | OPTIONAL | enables more verbose output; default = false | [true, false]
multipart | boolean | OPTIONAL | enables multipart file upload (recommended for files larger than 2 GB); default = true | [true, false]
gzip | boolean | OPTIONAL | enables gzip compression; default = true | [true, false]
Usage examples:

Please note that your AWS S3 Access Key ID and Secret Access Key must be set using the setup command first.

Code Block: Load CSV from AWS S3
loadCsv --dataset orders --mode full --s3Uri s3://my-company/data/orders.csv --verbose
Code Block: Load CSV from HTTPS URL
loadCsv --dataset orders --mode full --url https://www.example.com/download/orders.csv --verbose

dumpCsv

Dump data from a specified dataset into a CSV file.

This command also offers the synchronous/asynchronous execution type, analogously to the loadCsv command, as described above.

Parameter name | Type | Optionality | Description | Constraints
file | string | OPTIONAL | path to the CSV file | stored in config file
dataset | string | REQUIRED | dataset whose data will be dumped |
execution | enum | OPTIONAL | dump request execution type; default = async | [sync, async]
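A minimal sketch of a synchronous dump, using only the parameters above (the dataset name and file path are placeholders):

```shell
dumpCsv --dataset orders --file orders_dump.csv --execution sync
```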

dumpProject

Dump project data & metadata to a directory. If the dump is successful, it is opened as the current dump.

Parameter name | Type | Optionality | Description | Constraints
directory | string | OPTIONAL | directory to which the dump will be saved | stored in config file
skipMetadata | - | OPTIONAL | skip metadata dump |
skipData | - | OPTIONAL | skip data dump |
execution | enum | OPTIONAL | dump request execution type; default = async | [sync, async]
force | - | OPTIONAL | skip failed dataset dumps (for projects with incomplete data) |
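For example, a metadata-only dump into a chosen directory (the path is a placeholder) might look like:

```shell
dumpProject --directory /tmp/project_dump --skipData
```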

...

Parameter name | Type | Optionality | Description | Constraints
project | string | OPTIONAL | validate any other project than the currently opened one |
skipModel | string | OPTIONAL | skip validations of the data model |
skipData | string | OPTIONAL | skip validations of the data itself |
execution | enum | OPTIONAL | validation request execution type; default = async | [sync, async]
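As an illustration, a model-only validation of another project could be run as follows; the project id is reused from the earlier examples and only the parameters above appear:

```shell
validate --project djrt22megphul1a5 --skipData
```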


Opened dump

addMetadata

Add a metadata object to the project. The file must be located in a currently opened dump, and in the correct directory.

...

Parameter name | Type | Optionality | Description | Constraints
type | enum | REQUIRED | type of the object | [dataset]
objectName | string | REQUIRED | name of the object (with or without .json extension) |
subtype | enum | VARIES | subtype of the dataset | required only for dataset type; [basic, geometryPoint, geometryPolygon]
file | string | VARIES | path to the CSV file (located either in the dump, or anywhere in the file system) | required only for dataset type
primaryKey | string | OPTIONAL | name of the CSV column that will be marked as the primary key | if not specified, the first CSV column is selected
geometry | string | VARIES | name of the geometry key | required only for geometryPolygon subtype
csvSeparator | char | OPTIONAL | specifies a custom CSV column separator character | default = ,
csvQuote | char | OPTIONAL | specifies a custom CSV column quote character | default = "
csvEscape | char | OPTIONAL | specifies a custom CSV column escape character | default = \
Usage examples:
Code Block
createMetadata --type dataset --subtype basic --objectName "baskets" --file "baskets.csv" --primaryKey "basket_id"
createMetadata --type dataset --subtype geometryPoint --objectName "shops" --file "shops.csv" --primaryKey "shop_id"
createMetadata --type dataset --subtype geometryPolygon --objectName "district" --geometry "districtgeojson" --file "district.csv" --primaryKey "district_code"

...

Remove a metadata object from the project and from the dump. The file must be located in a currently opened dump, and must not be new.

...


...


Parameter name | Type | Optionality | Description | Constraints
objectName | string | VARIES | name of the object (with or without .json extension) | one of objectName, objectId or orphanObjects parameters must be specified
objectId | string | VARIES | id of the object | one of objectName, objectId or orphanObjects parameters must be specified
orphanObjects | boolean | VARIES | prints a list of removeMetadata commands to delete orphan metadata objects | an orphan object is an object not referenced from any of the project's views, nor visible anywhere else in the app
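For example, removing a single object by name, or listing removal commands for all orphan objects; the object name is a placeholder, and whether the boolean flag needs an explicit value is an assumption:

```shell
removeMetadata --objectName "baskets"
removeMetadata --orphanObjects true
```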


renameMetadata

Rename an object in a local dump and on the server. If the object is referenced by other objects via URI (/rest/projects/$projectId/md/{objectType}?name=), those references are renamed as well.

...

This command essentially wraps the functionality of loadCsv: it collects all modified metadata to upload and performs an asynchronous full load of the CSV files.

Parameter name | Type | Optionality | Description | Constraints
skipMetadata | - | OPTIONAL | skip metadata push |
skipData | - | OPTIONAL | skip data push |
skipValidate | - | OPTIONAL | skip the run of validate after push |
execution | enum | OPTIONAL | load request execution type; default = async | [sync, async]
force | boolean | OPTIONAL | force metadata push when there is a share breaking change |
verbose | boolean | OPTIONAL | enables more verbose output; default = false | [true, false]
multipart | boolean | OPTIONAL | enables multipart file upload (recommended for files larger than 2 GB); default = true | [true, false]
gzip | boolean | OPTIONAL | enables gzip compression; default = true | [true, false]
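A sketch of a metadata-only push that skips the post-push validation, using only the flags documented above:

```shell
push --skipData --skipValidate
```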

...