CleverMaps Shell is controlled solely by a set of predefined commands, which can be divided into four categories, or workflow states. All commands and parameters are case sensitive.

Each command can be further specified by parameters, some of which have default values. On the command line, every parameter is prefixed with "--"; for readability, this prefix is omitted from the tables below.

Parameters of string type take an arbitrary string as a value. Parameters of enum type accept one value from a predefined set of strings. Parameters of boolean type can be passed true, false, or no value (which is equivalent to true).

Workflow states

  • Started - you have started the tool

  • Connected to server - you have successfully logged in to your account on a specific server

  • Opened project - you have opened a project you have access to

  • Opened dump - you have created a dump, or opened an existing one

| Tool state | Command |
| --- | --- |
| Started | login, setup |
| Connected to server | openProject, listProjects, createProject, cloneProject, editProject, deleteProject |
| Opened project | importProject, importDatabase, loadCsv, dumpCsv, dumpProject, openDump, truncateProject, validate |
| Opened dump | addMetadata, createMetadata, removeMetadata, renameMetadata, copyMetadata, restoreMetadata, pushProject, status, fetch, applyDiff, diff |

Started

login

Log in to CleverMaps with valid credentials.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| accessToken | string | OPTIONAL | generated CleverMaps access token (see how to get one) | stored in config file |
| bearerToken | string | OPTIONAL | JWT token generated after signing in, with limited 1h validity | |
| dumpDirectory | string | OPTIONAL | directory where your dumps will be stored | stored in config file |
| server | string | OPTIONAL | server to connect to; default = https://secure.clevermaps.io | stored in config file |
| proxyHost | string | OPTIONAL | proxy server hostname | stored in config file |
| proxyPort | integer | OPTIONAL | proxy server port | stored in config file |
| s3AccessKeyId | string | OPTIONAL | AWS S3 Access Key ID; required for S3 upload (loadCsv --s3Uri) | stored in config file |
| s3SecretAccessKey | string | OPTIONAL | AWS S3 Secret Access Key; required for S3 upload (loadCsv --s3Uri) | stored in config file |
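
Usage example (an illustrative sketch; the token value and directory are placeholders):
Code Block
login --accessToken myAccessToken --dumpDirectory /home/user/dumps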

setup

Store your config and credentials in a file so you don't have to specify them each time you log in.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| accessToken | string | OPTIONAL | generated CleverMaps access token (see how to get one) | stored in config file |
| server | string | OPTIONAL | server to connect to | stored in config file |
| dumpDirectory | string | OPTIONAL | directory where your dumps will be stored | stored in config file |
| proxyHost | string | OPTIONAL | proxy server hostname | stored in config file |
| proxyPort | integer | OPTIONAL | proxy server port | stored in config file |
| s3AccessKeyId | string | OPTIONAL | AWS S3 Access Key ID; required for S3 upload (loadCsv --s3Uri) | stored in config file |
| s3SecretAccessKey | string | OPTIONAL | AWS S3 Secret Access Key; required for S3 upload (loadCsv --s3Uri) | stored in config file |
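
Usage example (the directory is a placeholder; the server shown is the default):
Code Block
setup --server https://secure.clevermaps.io --dumpDirectory /home/user/dumps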

Connected to server


openProject

Open a project and set it as the current one. If a local dump of the project exists, it is opened as well.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| project | string | REQUIRED | Project ID of the project to be opened | |
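
Usage example (the project ID is illustrative, reused from the importProject examples below):
Code Block
openProject --project djrt22megphul1a5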

listProjects

List all projects available to you on the server.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| verbose | boolean | OPTIONAL | specifies if the output should be more verbose | |
| share | enum | OPTIONAL | list projects by share type | [demo, dimension, template] |
| organization | string | OPTIONAL | list projects by organization (organization ID) | |
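
Usage example (lists template projects with verbose output):
Code Block
listProjects --share template --verbose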

createProject

Create a new project and open it.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| title | string | REQUIRED | title of the project | |
| description | string | OPTIONAL | description of the project; can be formatted with Markdown syntax | |
| organization | string | OPTIONAL | ID of the organization which will become the owner of the project | |
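
Usage example (title and description are placeholders):
Code Block
createProject --title "Retail analysis" --description "Analysis of **retail** data"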

cloneProject

Clone a project from a source project and open the clone. Cloning is handled on the server, so no data or metadata are transferred to the local Shell environment.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| project | string | REQUIRED | Project ID of the project from which the new project will be cloned | |
| organization | string | REQUIRED | ID of the organization which will become the owner of the project | |
| description | string | OPTIONAL | description of the project; can be formatted with Markdown syntax | |
| title | string | OPTIONAL | title of the project; if none is provided, defaults to "Clone of [projectName]" | |
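
Usage example (the project and organization IDs are placeholders):
Code Block
cloneProject --project djrt22megphul1a5 --organization myOrganizationId --title "Clone for testing"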

editProject

Edit project properties.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| project | string | REQUIRED | Project ID of the project to be edited | |
| newTitle | string | OPTIONAL | new title of the project | |
| newDescription | string | OPTIONAL | new description of the project | |
| newStatus | string | OPTIONAL | new status of the project | [enabled, disabled] |
| newOrganization | string | OPTIONAL | ID of the new organization which will become the owner of the project | |
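
Usage example (values are placeholders):
Code Block
editProject --project djrt22megphul1a5 --newTitle "Renamed project" --newStatus disabled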

deleteProject

Delete an existing project.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| project | string | REQUIRED | Project ID of the project to be deleted | |

Opened project


importProject

Allows you to import metadata and data from another project, either directly from the server or from a local dump.

You can also import just a part of a project with one of the object-type parameters (dashboards, datasets, indicators, indicatorDrills, markers, markerSelectors, metrics, views). If you specify none of these parameters, the whole project is imported. Whenever you specify the datasets parameter, the corresponding data is imported as well.

Before each import, the validate command is called in the background. If there are any model validation violations in the source project, the import is stopped, unless you also provide the --force parameter.

During the import, the origin key is set on all metadata objects. This key records the original location of the object (server and project). It has a special use for dataset & data imports: import first looks at which datasets are currently in the project and compares them with the datasets about to be imported. Datasets that are not present in the destination project are imported automatically. For datasets that are present in the destination project, 3 cases might occur:

  • if they have the same name and origin, the dataset will not be imported

  • if they have the same name but different origin, a warning is shown and the dataset will not be imported

  • if a prefix is specified, all source datasets will be imported

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| project | string | REQUIRED | Project ID of the project from which files will be imported | |
| dump | boolean | VARIES | import project from local dump | (info) project must be specified |
| serverSide | boolean | VARIES | performs import of the specified project on the server | (info) project must be specified |
| cascadeFrom | string | OPTIONAL | cascade import an object and all objects it references | see usage examples below |
| prefix | string | OPTIONAL | specify a prefix for the metadata objects and data files | |
| dashboards | boolean | OPTIONAL | import dashboards only | |
| dataPermissions | boolean | OPTIONAL | import data permissions only | |
| datasets | boolean | OPTIONAL | import datasets only | |
| exports | boolean | OPTIONAL | import exports only | |
| indicators | boolean | OPTIONAL | import indicators only | |
| indicatorDrills | boolean | OPTIONAL | import indicator drills only | |
| markers | boolean | OPTIONAL | import markers only | |
| markerSelectors | boolean | OPTIONAL | import marker selectors only | |
| metrics | boolean | OPTIONAL | import metrics only | |
| projectSettings | boolean | OPTIONAL | import project settings only | |
| shares | boolean | OPTIONAL | import shares only | |
| views | boolean | OPTIONAL | import views only | |
| force | boolean | OPTIONAL | ignore source project validate errors and proceed with import anyway; skip failed dataset dumps (for projects with incomplete data) | default = false |
| skipData | boolean | OPTIONAL | skip data import | default = false |

Usage examples:
Cascade import examples
Code Block
// import all objects referenced from catchment_area_view including datasets & data
importProject --project djrt22megphul1a5 --cascadeFrom catchment_area_view

// import all objects referenced from catchment_area_view, except datasets & data
importProject --project djrt22megphul1a5 --cascadeFrom catchment_area_view --dashboards --exports --indicatorDrills --indicators --markerSelectors --markers --metrics --views

// import all objects referenced from catchment_area_dashboard
importProject --project djrt22megphul1a --cascadeFrom catchment_area_dashboard

// import all objects (datasets) referenced from baskets dataset - data model subset
importProject --project djrt22megphul1a5 --force --cascadeFrom baskets


importDatabase

Allows you to create datasets and import data from an external database.

This command reads the database metadata, creates datasets from it, then imports the data and saves it as CSV files. You can skip either part with the --skipMetadata and --skipData parameters. Please note that this command does not create any metadata objects other than datasets. It is also possible to import only specific tables using the --tables parameter.

The database must be located on a running database server accessible under a URL. This can be on localhost, or anywhere on the internet. Valid credentials to the database are of course necessary.

So far, the command supports these database engines:

  • PostgreSQL (postgresql)

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| engine | enum | REQUIRED | name of the database engine | [postgresql] |
| host | string | REQUIRED | database server hostname; for local databases, use localhost | |
| port | integer | REQUIRED | database server port | |
| schema | string | OPTIONAL | name of the database schema; leave out if your engine does not support schemas, or the schema is public | |
| database | string | REQUIRED | name of the database | |
| user | string | REQUIRED | user name for login to the database | |
| password | string | REQUIRED | user's password | |
| tables | array | OPTIONAL | list of tables to import; leave out if you want to import all tables from the database | example = "orders,clients,stores" |
| skipData | boolean | OPTIONAL | skip data import | default = false |
| skipMetadata | boolean | OPTIONAL | skip metadata import | default = false |

Usage examples:
Code Block
importDatabase --engine postgresql --host localhost --port 5432 --database my_db --user postgres --password test
importDatabase --engine postgresql --host 172.16.254.1 --port 6543 --schema my_schema --database my_db --user postgres --password test --tables orders,clients,stores


loadCsv

Load data from a CSV file into a specified dataset.

loadCsv also offers various CSV input settings. Your CSV file may contain specific features, like custom quote or separator characters. The parameters with the csv prefix allow you to configure the data load to fit these features, instead of transforming the CSV file to one specific format. Special cases include the csvNull and csvForceNull parameters.

  • csvNull allows you to specify a value, which will be interpreted as a null value

    • e.g. "false" or "_"

  • csvForceNull then specifies on which columns the custom null replacement should be enforced

    • e.g. "name,title,description"

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| file | string | VARIES | path to the CSV file | (info) one of file, s3Uri or url must be specified |
| s3Uri | string | VARIES | URI of an object on AWS S3 to upload (see examples below) | (info) one of file, s3Uri or url must be specified |
| url | string | VARIES | HTTPS URL of a CSV file to be loaded into the dataset | (info) one of file, s3Uri or url must be specified |
| dataset | string | REQUIRED | name of the dataset into which the data should be loaded | |
| mode | enum | REQUIRED | data load mode; incremental appends the data to the end of the table, full truncates the table and loads it anew | [incremental, full] |
| csvHeader | boolean | OPTIONAL | specifies if the CSV file to upload has a header | default = true |
| csvSeparator | char | OPTIONAL | specifies the CSV column separator character | default = , |
| csvQuote | char | OPTIONAL | specifies the CSV quote character | default = " |
| csvEscape | char | OPTIONAL | specifies the CSV escape character | default = \ |
| csvNull | string | OPTIONAL | specifies the replacement of custom CSV null values | |
| csvForceNull | enum | OPTIONAL | specifies which CSV columns should enforce the null replacement | |
| verbose | boolean | OPTIONAL | enables more verbose output | default = false |
| multipart | boolean | OPTIONAL | enables multipart file upload (recommended for files larger than 2 GB) | default = false |
| gzip | boolean | OPTIONAL | enables gzip compression | default = true |

Usage examples:

Please note that your AWS S3 Access Key ID and Secret Access Key must be set using the setup command first.

Load CSV from AWS S3
Code Block
loadCsv --dataset orders --mode full --s3Uri s3://my-company/data/orders.csv --verbose
Load CSV from HTTPS URL
Code Block
loadCsv --dataset orders --mode full --url https://www.example.com/download/orders.csv --verbose
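Load CSV with custom null values (a sketch combining the csvNull and csvForceNull settings described above; the file path is a placeholder)
Code Block
loadCsv --dataset orders --mode incremental --file /data/orders.csv --csvNull "_" --csvForceNull "name,title,description"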


dumpCsv

Dump data from a specified dataset into a CSV file.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| dataset | string | REQUIRED | name of the dataset to dump | |
| force | boolean | OPTIONAL | overwrites previously dumped data | |
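
Usage example (dumps the orders dataset, overwriting any previously dumped data):
Code Block
dumpCsv --dataset orders --force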

dumpProject

Dump project data and metadata to a directory. If the dump is successful, it is opened as the current dump.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| skipMetadata | boolean | OPTIONAL | skip metadata dump | default = false |
| skipData | boolean | OPTIONAL | skip data dump | default = false |
| force | boolean | OPTIONAL | overwrites the current dump | default = false |
| nativeDatasetsOnly | boolean | OPTIONAL | dump only native datasets (those without the origin attribute) | default = false |
| ignoreFailedDatasets | boolean | OPTIONAL | skip failed dataset dumps (for projects with incomplete data) | default = false |
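
Usage example (dumps only metadata, overwriting the current dump):
Code Block
dumpProject --skipData --force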

openDump

Open the dump of the current project.

This command has no parameters.


truncateProject

Delete all metadata and data from the project.

This command has no parameters.


validate

Validate the project's data model and data integrity. An update of a dataset definition (metadata) should be followed by a data update. Typically, it helps to run a full load of data for the updated dataset (see the loadCsv command). During a full load, the original DWH table is dropped and the data is loaded into a new table created according to the updated dataset definition.

If a user updates just the metadata model, this can cause an inconsistency between the metadata model and the DWH database model. The validate command is used to detect these problems.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| project | string | OPTIONAL | Project ID of another project which will be validated | |
| skipModel | string | OPTIONAL | skip validations of the data model. Model validation compares the metadata definition of a dataset with the table definition in the DWH database (DDL). This validation is fast; no data are validated. | default = false |
| skipData | string | OPTIONAL | skip validations of the data itself | default = false |
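
Usage example (validates only the data model of the currently opened project):
Code Block
validate --skipData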

List of violation types:

| validation category | violation | violation description |
| --- | --- | --- |
| model | MissingTableValidationException | missing table in the DWH database for a given dataset |
| model | MissingColumnValidationException | a column present in the dataset definition is missing in the DWH table |
| model | NotMatchingDataTypeColumnException | the data type of a column in the DWH table differs from the type in the dataset definition |
| model | MisorderColumnValidationException | the order of columns in the DWH table does not match the order of properties in the dataset definition |
| model | MissingPrimaryKeyColumnValidationException | the primary key defined in the dataset definition was not found in the DWH table |
| model | UnexpectedColumnValidationException | an extra column was found in the DWH table compared with the dataset definition |
| model | MismatchingForeignKeyDataTypeException | a foreign key column's data type must be the same as the data type of the referenced primary key |
| data | NotUniquePrimaryKeyValidationException | duplicate values have been found in a primary key column |
| data | NanInNumericColumnValidationException | a NaN value has been found in a numeric column |
| data | ReferenceIntegrityValidationException | compares foreign key values with the referenced values of a primary key column; missing values are reported |

Opened dump


addMetadata

Add a new metadata object and upload it to the project. The file must be located in the currently opened dump, in the correct directory.

If the --objectName parameter is not specified, addMetadata will add all new objects in the current dump.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| objectName | string | OPTIONAL | name of the object (with or without .json extension) | |

Info: Updating existing metadata objects
When modifying metadata objects that have already been added (uploaded), use the pushProject command to upload the modified objects to the project.
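
Usage example (the object name is a placeholder):
Code Block
addMetadata --objectName average_order_value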


createMetadata

Create a new metadata object.

(info) At this moment, only the dataset type is supported. Datasets are generated from a provided CSV file.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| type | enum | REQUIRED | type of the object to create | [dataset] |
| objectName | string | REQUIRED | name of the object to create | |
| subtype | enum | VARIES | subtype of the dataset to create | required only for dataset type; [basic, geometryPoint, geometryPolygon] |
| file | string | VARIES | path to the CSV file (located either in the dump, or anywhere in the file system) | required only for dataset type |
| primaryKey | string | VARIES | name of the CSV column that will be marked as primary key | required only for dataset type |
| geometry | string | VARIES | name of the geometry key | required only for geometryPolygon subtype |
| csvSeparator | char | OPTIONAL | specifies the CSV column separator character | default = , |
| csvQuote | char | OPTIONAL | specifies the CSV quote character | default = " |
| csvEscape | char | OPTIONAL | specifies the CSV escape character | default = \ |

Usage examples:
Code Block
createMetadata --type dataset --subtype basic --objectName "baskets" --file "baskets.csv" --primaryKey "basket_id"
createMetadata --type dataset --subtype geometryPoint --objectName "shops" --file "shops.csv" --primaryKey "shop_id"
createMetadata --type dataset --subtype geometryPolygon --objectName "district" --geometry "districtgeojson" --file "district.csv" --primaryKey "district_code"


removeMetadata

Remove a metadata object from the project and from the dump. The file must be located in the currently opened dump, and must not be a new object.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| objectName | string | VARIES | name of the object (with or without .json extension) | (info) one of objectName, objectId or orphanObjects must be specified |
| objectId | string | VARIES | ID of the object | |
| orphanObjects | boolean | VARIES | prints a sequence of removeMetadata commands to delete orphan metadata objects; an orphan object is an object not referenced from any of the project's views, or visible anywhere else in the app | |
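
Usage examples (the object name is a placeholder):
Code Block
removeMetadata --objectName obsolete_metric
removeMetadata --orphanObjects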

renameMetadata

Rename a metadata object in the local dump and on the server. If the object is referenced by URI in other objects (/rest/projects/$projectId/md/{objectType}?name=), the references will be renamed as well.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| objectName | string | REQUIRED | current name of the object (with or without .json extension) | |
| newName | string | REQUIRED | new name of the object (with or without .json extension) | |
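
Usage example (object names are placeholders):
Code Block
renameMetadata --objectName orders_view --newName sales_view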

copyMetadata

Create a copy of an object existing in a currently opened dump.

This command unwraps the object from its wrapper, renames it, and removes generated common syntax keys. If the objectName and newName arguments are the same, the object is only unwrapped.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| objectName | string | REQUIRED | current name of the object (with or without .json extension) | |
| newName | string | REQUIRED | name of the object copy (with or without .json extension) | |
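
Usage example (object names are placeholders):
Code Block
copyMetadata --objectName orders_view --newName orders_view_copy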

restoreMetadata

Restore metadata objects from a local dump onto the server.

If the --objectName parameter is not specified, restoreMetadata restores, adds and pushes all changed objects in the dump. For objects present on the server but not in the local dump, restoreMetadata prints a list of removeMetadata commands to delete the server objects.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| objectName | string | OPTIONAL | name of the object (with or without .json extension) | |

pushProject

Upload all modified files (data & metadata) to the project.

This command essentially wraps the functionality of loadCsv: it collects all modified metadata for upload and performs a full load of the CSV files.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| skipMetadata | boolean | OPTIONAL | skip metadata push | default = false |
| skipData | boolean | OPTIONAL | skip data push | default = false |
| skipValidate | boolean | OPTIONAL | skip the run of validate after the push | default = false |
| verbose | boolean | OPTIONAL | enables more verbose output | default = false |
| multipart | boolean | OPTIONAL | enables multipart file upload (recommended for files larger than 2 GB) | default = false |
| gzip | boolean | OPTIONAL | enables gzip compression | default = true |
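
Usage example (pushes all modified files, skipping the final validation):
Code Block
pushProject --skipValidate --verbose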

status

Check the status of the currently opened dump against the project on the server.

This command detects files which have been changed locally or remotely, or are missing in the dump, and also detects files which have a syntax error or constraint violations.

When --remote is used, the command only displays the metadata content on the server.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| remote | boolean | OPTIONAL | list metadata content on the server | default = false |
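
Usage example (lists the metadata content on the server):
Code Block
status --remote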

fetch

Fetch objects that have changed on the server and update the local objects.

Server objects are compared with local objects, and one of the following actions can happen:

  • an object changed on the server and not modified locally is dumped

  • an object changed on the server and also modified locally creates a conflict (see Conflict below)

  • an object deleted on the server and not modified locally is deleted

  • an object deleted on the server but modified locally is unwrapped and can be added again with addMetadata

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| objectName | string | OPTIONAL | name of the object to fetch (with or without .json extension) | |
| force | boolean | OPTIONAL | overwrite local changes | |

Conflict

When an object is modified both on the server and locally, a conflict occurs. Conflicts are written into the metadata objects and have to be resolved.

Conflict example:
Code Block
...
			"visualizations": {
<<<<<<< local
                "grid": false
=============
                "grid": true
>>>>>>> server
            },
...


applyDiff

Create and apply a metadata diff between two live projects.

This command compares all metadata objects of the currently opened project with the project specified by the --sourceProject parameter, and applies the changes to the currently opened dump. Metadata objects in the dump can be either:

  • added (completely new objects that are not present in the currently opened project)

  • modified

  • deleted (objects not present in the sourceProject)

When the command finishes, you can review the changes applied to the dump using either the status or diff commands. The command then tells you which specific subsequent steps to perform. This can be one of (or all of) these commands:

  • addMetadata (to add the new objects)

  • pushProject (to push the changes in modified objects)

  • removeMetadata (to remove the deleted objects - a command list which must be copy-pasted into Shell is generated)

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| sourceProject | string | REQUIRED | Project ID of the source project | |
| objectTypes | array | OPTIONAL | list of object types to be compared | example = "views,indicators,metrics" |
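
Usage example (the source project ID is a placeholder; compares only the listed object types):
Code Block
applyDiff --sourceProject djrt22megphul1a5 --objectTypes views,indicators,metrics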

diff

Compare local metadata objects with those in the project, line by line.

If the --objectName parameter is not specified, all wrapped modified objects are compared.

| Parameter name | Type | Optionality | Description | Constraints |
| --- | --- | --- | --- | --- |
| objectName | string | OPTIONAL | name of a single object to compare (with or without .json extension) | |

The command outputs sets of changes ("deltas") made in each object. Each object can have multiple deltas, each preceded by a header with the following syntax:

Code Block
/{objectType}/{objectName}.json
[ A1 A2 | B1 B2 ]
...

Where:
A1 = start line number of the delta in the dump file
A2 = end line number of the delta in the dump file
B1 = start line number of the delta in the remote file
B2 = end line number of the delta in the remote file

Specific output example:

...

The content you are trying to reach has been moved here: https://docs.clevermaps.io/docs/command-list

We are proud to announce that we have launched a new documentation. Please update your saved links and bookmarks to the new address, docs.clevermaps.io.