...

  1. It has a unique subject URI, and exactly one asserted rdf:type statement.
  2. Its rdf:type (possibly an inferred type) is a member of the designated class group indicating embedded instances. In the eagle-i data model the URI of this class group is:
    Code Block
    http://eagle-i.org/ont/app/1.0/ClassGroup_embedded_class
  3. It has exactly one parent: there is exactly one instance that is the subject of statements for which the EI is the object. This is an informal restriction (really an assumption) imposed on all instances of embedded types, although it is enforced by logic in the repository.
  4. EIs may not be Orphaned or Shared. Any transaction that would leave an EI without a parent (in other words, not the object of any conforming statements) or with multiple parents is forbidden by logic in the repository. The only way to remove an EI from its parent is to delete all of its statements. You can copy an EI to another parent by creating a new EI under that parent, with a new unique URI for its subject.
  5. No Broken Links to EIs. If an EI is removed, no instance may retain statements of which the EI is the object; these must be removed as well.
  6. EIs do not Have Metadata. The repository does not create metadata (e.g. Dublin Core) about EIs. Any transactions on an EI are considered transactions on its parent and are recorded as such, e.g. in the last-modified date in the parent's metadata.
  7. Transactional Integrity. All transactional operations, such as /update and workflow, operate on the EI statements together with the parent instance's statements. For example, a workflow transition to published moves all the EIs to the published graph along with their parent's statements.
  8. EIs reside in the same graph as the parent. Though it seems obvious, it's worth stating formally that the statements describing an EI must reside in the same named graph (workspace) as the statements of its parent.
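As a sketch (not part of the repository codebase), the single-parent rule above can be checked over a list of triples; the URIs below are purely illustrative:

```python
# Illustrative check of the "exactly one parent" rule for an embedded
# instance (EI). Triples are (subject, predicate, object) strings; the
# URIs are made up for the example.

def parent_count(triples, ei_uri):
    """Number of distinct instances that have the EI as an object."""
    return len({s for (s, p, o) in triples if o == ei_uri})

triples = [
    ("http://example.org/i/lab1", "http://example.org/p/hasAddress",
     "http://example.org/i/addr1"),
    ("http://example.org/i/addr1", "http://example.org/p/city", "Boston"),
]

# addr1 is well-formed: exactly one parent (lab1). Zero parents (orphaned)
# or more than one (shared) is forbidden by the repository.
assert parent_count(triples, "http://example.org/i/addr1") == 1
```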

...

Request must include either the name argument or all.

URL: /repository/graph (GET)

Args:
name=named-graph-URI---dumps the named graph with this URI. Mutually exclusive with all.
all---dumps all (accessible) graphs. Format must be one that encodes quads, e.g. TriG or TriX, or an error will occur.
format---MIME type of the desired output serialization format.
inferred---include inferred statements as well. By default they are left out. This argument does not need a value; it is boolean, so just including its name as a query arg asserts it. (SEE WARNING BELOW)

Result:
Output document is the serialization of the selected RDF statements. Any failure returns an appropriate HTTP status.

WARNING: When you dump a graph with the inferred option on, be sure it does NOT get re-loaded into the repository: the inferred statements would become explicit and would not get re-computed when necessary. This option was mainly intended for testing.

Access:
Requires read access on the selected graph(s). When all is specified, non-readable graphs are ignored.
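As an illustration of the argument conventions above (bare boolean args, name vs. all), a client might assemble the dump URL like this; the host name is hypothetical:

```python
from urllib.parse import urlencode

BASE = "http://repo.example.org/repository/graph"   # hypothetical host

def dump_url(name=None, fmt="application/x-trig", inferred=False):
    """Build a /graph GET URL; 'all' is used when no graph name is given."""
    url = BASE + "?" + urlencode([("format", fmt)])
    if name:
        url += "&" + urlencode([("name", name)])
    else:
        url += "&all"            # mutually exclusive with name
    if inferred:
        url += "&inferred"       # boolean: presence alone asserts it
    return url

# Dump one named graph, and dump everything including inferred statements:
one = dump_url(name="http://repo.example.org/i/workspace-1")
everything = dump_url(inferred=True)
```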

Load Serialized RDF into Named Graph (/graph - POST, PUT)

Import a serialized RDF document into a specific named graph, optionally replacing all previous contents. The repository records some provenance metadata about when and who did the import, and a description of the source material.

If the name parameter is not specified, and the format supports named graphs or quads (e.g. TriG or TriX), then the graph name is taken from the ingested data. This is very dangerous, but is useful for, e.g., restoring an entire repository from a backup. It requires the Administrator role.

...

Note on File Format and Character Set: The request specifies the file format and/or character set of the serialized RDF data as a Content-Type value, for example:

Code Block
 text/rdf+n3; charset="ISO-8859-1"

The character set defaults to Unicode UTF-8, so if your source data is not in that character set you must declare it. This can be provided in several different places; they are searched in this order of priority, and the first one found is the only one considered:

format---query argument value. This takes precedence because some clients may not allow complete control of the content-type headers.
Content-Type---header on the value of the content entity in a POST request.
Content-Type---header in the body of a PUT request.
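A sketch of that priority search; the function and argument names are hypothetical stand-ins for the three places listed above:

```python
def resolve_format(query_format=None, entity_content_type=None,
                   request_content_type=None):
    """Return the first format found, searched in the documented
    priority order: format query arg, then the content entity's
    Content-Type, then the request's Content-Type header."""
    for candidate in (query_format, entity_content_type,
                      request_content_type):
        if candidate:
            return candidate
    return None

# The query argument wins even when headers are also present:
assert resolve_format("application/x-trig",
                      'text/rdf+n3; charset="ISO-8859-1"') == "application/x-trig"
```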

URL: /repository/graph (PUT or POST)

Args:
name=named-graph-URI---Required name of the affected graph. Mutually exclusive with all.
all---loads serialized data into all (accessible) graphs. Format must be one that encodes quads, e.g. TriG or TriX.
action=(add|replace|delete)---Required; determines whether new statements are added to the existing contents of the graph, replace the contents, or are deleted from the graph.
format---MIME type of the input serialization format; overrides the Content-Type in the HTTP protocol.
type=(metadata|ontology|workspace|internal|published)---Keyword representing the type of content in this named graph.
label=text---value for rdfs:label property of the named graph, which appears as its title.
content=serialized-RDF---The immediate content of serialized RDF to load into the graph. ONLY used when the method is POST, but then it is required.
source=string---The file or URL to record as the source of the data being ingested into the graph. Only applies to add and replace actions. Overrides any filename specified with the content, which would otherwise be recorded as the source identifier. It is recorded as the dcterms:identifier property of the dcterms:source node in the metadata.
sourceModified=dateTimeStamp---last-modified time for the source of data being ingested. Value must be parsed as XSD date-time expression. It is recorded as the dcterms:modified property of the dcterms:source node in the metadata.
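For illustration, the form fields of a replace request might look like this (all values are hypothetical, and the content field carries the serialized RDF document):

```python
# Sketch of form fields for POST /repository/graph replacing a workspace
# graph's contents. Graph URI, label, and source values are made up.
fields = {
    "name": "http://repo.example.org/i/workspace-1",
    "action": "replace",                         # add | replace | delete
    "format": "application/x-trig",              # overrides Content-Type
    "type": "workspace",
    "label": "Test workspace",
    "content": "@prefix ex: <http://example.org/> . ...",
    "source": "backup-2011-01-10.trig",          # recorded as dcterms:identifier
    "sourceModified": "2011-01-10T20:49:10",     # recorded as dcterms:modified
}
```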

Result:
HTTP status indicates success.

Access:
If the graph already exists, requires add access (and remove access when action is replace or delete). If there is no existing graph by that name, only the administrator can create a new one. For multi-graph (all) load, requires superuser access.
Logout (/logout)
Destroys any lingering session state and credentials for the current user. Users of shared or public computers need a way to positively end a session so that subsequent users do not get their access to the repository.

NOTE: Beware of this if you are using a Web browser to access the repository. The repository uses HTTP Basic Authentication to identify users. Most web browsers cache the last Basic Authentication credentials for a site and don't offer an easy way to clear them.

For Mozilla Firefox 3, try the Web Developer add-on and select Miscellaneous->Clear Private Data->HTTP Authentication from its menus.

URL: /repository/logout (POST only)

Args: none.

Result:
HTTP status indicates success. Succeeds even if there was no session to destroy.

Access:
Requires a login and established session.


Check or Load Data Model Ontology (/model) - Admin Only

This service either reports on or loads the Data Model Ontology. The GET request returns a report of the version of the ontology loaded, and the version that is available to be automatically loaded from within the repository's codebase. The POST request gives a choice of selectively updating or forcibly replacing the ontology.

URL: /repository/admin/model (GET)

Args:
format--- MIME-type of desired output format

Returns a tabular result of two columns:

  1. loaded---the version of the data model ontology currently loaded, if any. For example, "0.4.6"
  2. available---the version of data model ontology that is available to be loaded.

The value of loaded is the result of this query over the ontology view:

Code Block
 select * where {<http://purl.obolibrary.org/obo/ero.owl> owl:versionInfo ?loaded} 

URL: /repository/admin/model (POST)

Args:
action=(update|load)
source=(jar)

Loads a new copy of the data model ontology, replacing the entire existing contents (if any) of its named graph. For more details about how the graph name and data model source are determined, see The Data Model Configuration Properties Guide.

When action=update, the ontology is only loaded if the version of the current ontology differs from the version of the available one (or if nothing is loaded yet). Note that it does not attempt to judge whether the available version is later, just different.

When action=load, the "available" ontology is always forcibly loaded.
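The update/load decision described above can be sketched as follows (version strings are illustrative):

```python
def should_load(action, loaded, available):
    """Decide whether to (re)load the data model ontology.

    'update' loads only when the loaded version differs from the
    available one (or nothing is loaded yet); it does not judge which
    version is later. 'load' always loads.
    """
    if action == "load":
        return True
    if action == "update":
        return loaded is None or loaded != available
    raise ValueError("unknown action: " + action)

assert should_load("update", "0.4.6", "0.4.6") is False
assert should_load("update", None, "0.4.6") is True     # nothing loaded yet
assert should_load("update", "0.4.7", "0.4.6") is True  # different, not later
assert should_load("load", "0.4.6", "0.4.6") is True    # forcible
```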

The source argument determines where the ontology data comes from. When source=jar, which is recommended, the ontology is the collection of all OWL files in the maven package eagle-i-model-owl as it was loaded into the repository's source base. This ought to reflect the version of the ontology for which the software was designed. When source=model, the EIOntModel is used as the data source. Although there should be no difference between them, because the EIOntModel is loaded into Jena from the same source files, in practice there appear to be some differences.

Manage Internal Graphs (/internal)

URL: /repository/internal (GET, POST)

This service manages the internal named graphs which are (usually) automatically initialized by the repository upon bootstrap. Each of these graphs has an initializer data file of serialized RDF that is a versioned part of the source and gets built into the webapp. This API service lets you get the initializers and read or query them so that, e.g., an upgrade procedure can integrate differences into a locally-modified internal graph.

Extra unrelated action: When action=decache is specified, this service tells the running repository that it should decache any in-memory caches of internal RDF metadata because internal graphs have been changed behind its back, e.g. by the /internal request.

This service is intended for repository internal maintenance and testing procedures ONLY. If you are planning to use it in a user application, you are very likely making a bad mistake. Its API is unstable and will probably change without any notice.

GET Method:
name=uri - optional, URI of the internal graph to report on; default is to list all known internal graphs.
  format=mimetype - optional, default is negotiated, must be tuple query result format.

Result is a tabular query response with the following columns:

  1. name--- URI of the graph
  2. loaded---version of initializer that was loaded, if any
  3. available---version of initializer that is available and would be loaded.

POST Method:
action=(read|load|query|decache)---required, specifies what to do
name=uri---required when action=load,read,query; in fact, can be repeated for query.
format=mime---mime type of response: when action=read must be a serialized RDF format; when action=query must be suitable for query result type.
query=sparql-text---text of query to run over initializer for graph(s)

This is a multi-function service, depending on the value of action: When action=load, replaces the contents of an internal graph with the initializer in the webapp. For action=read, returns the contents of the initializer for the given graph in the result document. For action=query, runs the given SPARQL query over the contents of the initializer(s) for the given graphs (this is the ONLY form that can take multiple graphs). For action=decache, tells the repository server to decache all in-memory data derived from RDF content in case, for example, the internal graphs have been modified or updated. All of these actions require Administrator privilege.

Harvest Resource Metadata (/harvest)

This service harvests the "metadata", which is actually the resource instance data from the repository. It is intended to be used by an external search index application to keep its index up-to-date by incremental updates, since that is much more efficient than periodically rebuilding the index.

It returns a result in any of the SPARQL query tuple response formats, selected by the format arg. The columns are:

  1. subject---the resource instance's URI
  2. predicate---predicate of a statement about the resource (only present when detail=full)
  3. value---value of a statement about the resource (only present when detail=full)

Note that deleted resource instances are only indicated when a report over a time span is selected. When the from argument is not specified, the complete current contents of the repository are returned so deleted instances are simply not indicated. We assume the caller is rebuilding its index from scratch so there is no need to remove instances that have been deleted.

Property Filtering

The set of properties returned when detail=full is automatically filtered to remove hidden and contact properties as dictated by the access privileges of the user making the request. The filtering follows the same access-control rules as the Dissemination service.
URL: /harvest (GET or POST method)

Args:
format---optionally override the dissemination format that would be chosen by HTTP content negotiation. Note that choosing text/html results in a special human-readable result.
view---optionally choose a different view dataset (see Views in concepts section) from which to select the graph for dissemination. Mutually exclusive with workspace. Default is the published-resources view. Be careful to choose a view that does not include repository user instances.
workspace---URI of the workspace named graph to take the place of the default graph.  Relevant metadata and ontology graphs are included automatically.  Mutually exclusive with view.
inferred---When true, includes all inferred statements in the generated results. This really only applies to rdf:type statements; default is false, so inferred types are left out of the results. Must be false when detail=identifier.
from---(optional) only resource instances last modified on or after this timestamp are shown. See Date/Time Format below.
after---(optional) does the same thing as from, with which it is mutually exclusive, except the time is exclusive rather than inclusive. That is, only resources modified later than this timestamp are shown.
until---(optional, not implemented yet) only resource instances last modified on or before this timestamp are shown. See Date/Time Format below. NOTE: this may not be fully implemented until versioning is done.
embedded---(boolean) include URIs of Embedded Instances (see Concepts section above) in a report at the detail=identifier level. Has no effect at detail=full. This is usually not a good idea; it was included for troubleshooting.
detail=(identifier|full)---adjusts the level of detail. Required. This affects the way deleted items are represented in the results:

  • When identifier is chosen, only the identifier of each resource is returned in the subject column, AND deleted subjects are indicated by prefixing the URI with a fixed URI prefix such as "info:deleted#" to represent that the URI in the fragment is deleted.
  • For full results, a deleted item is indicated with a synthetic statement: the predicate :isDeleted and a boolean true value.
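A harvester consuming detail=identifier results might split out the deletion marker like this. Note the text above gives the prefix only as "such as info:deleted#", so treat the exact value as something to confirm against your repository:

```python
DELETED_PREFIX = "info:deleted#"   # illustrative; see the caveat above

def parse_subject(subject):
    """Split a detail=identifier subject into (uri, is_deleted)."""
    if subject.startswith(DELETED_PREFIX):
        return subject[len(DELETED_PREFIX):], True
    return subject, False

assert parse_subject("info:deleted#http://example.org/i/x") == \
    ("http://example.org/i/x", True)
assert parse_subject("http://example.org/i/x") == \
    ("http://example.org/i/x", False)
```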

About Date/Time Format:
The date/time expression for the from and until args must be in the XML date format, which is a slightly restricted version of the ISO8601 date format. Note that if the time component is included, hours, minutes and seconds must be specified.  Here are some examples:

  • 2010-06-28 - June 28, 2010, the start of the day (just past midnight).
  • 2010-06-28T13:45:06 - June 28 2010, 13:45:06 in the local time zone
  • 2010-06-28T13:45:06.123 - June 28 2010, 13:45:06 and 123 milliseconds,  in the local time zone
  • 2010-06-28T17:45:06Z - June 28 2010, 17:45:06 in UTC (GMT, Zulu) time
  • 2010-06-28T13:45:06-04:00 - June 28 2010, 13:45:06 in the time zone 4 hours west of Greenwich
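For example, a Python client could produce or check these forms with the standard library; the Z-handling line is needed because some fromisoformat versions reject the UTC designator:

```python
from datetime import datetime

def parse_xsd_datetime(text):
    """Parse the restricted XSD date/time forms shown above (sketch)."""
    if text.endswith("Z"):                 # UTC (Zulu) designator
        text = text[:-1] + "+00:00"
    return datetime.fromisoformat(text)

# Date only: start of the day, just past midnight.
assert parse_xsd_datetime("2010-06-28").isoformat() == "2010-06-28T00:00:00"
# UTC timestamp: offset is zero.
assert parse_xsd_datetime("2010-06-28T17:45:06Z").utcoffset().total_seconds() == 0
```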

Result:
Response document is a SPARQL tabular ("select") query result, formatted according to the chosen or negotiated format. 

Only resources in the named graphs selected by the view or workspace arg in the query are shown. For time-span queries, deleted instances in all graphs will be returned, so some of the delete notifications may not correspond to items in the caller's index. (On the other hand, it is also possible for a resource instance to be created and deleted in between harvests, so there is always the possibility of getting a deleted notice for a resource that the caller never indexed.)

About Incremental Updates:
The 'from' argument is intended to let you get incremental changes to keep a search index up to date. However, it is essential that you give each incremental request a timestamp that accurately reflects the actual time on the server of the last harvest. Since there are delays in transmission and server and client may not have synchronized clocks, you should not rely on the client's time for this. Each /harvest response includes an HTTP header Last-Modified which has the time of the last modification to the repository. This is guaranteed to give you all of the relevant changes in the next harvest. Unfortunately, since that timestamp comes back in HTTP format, you will have to parse it and translate it to XML format; also, it only has 1 second granularity, so you may see some "false positives" for resources updated less than a second after that last-modified time, which also appeared in the last incremental harvest.

Precise last-modified timestamp: the /harvest service also adds a non-standard HTTP header to its responses to describe the last-modified time in full precision down to the millisecond level. This allows an application to give that precise time as the after argument value on the next incremental harvest to be sure of capturing any changes since that last /harvest call. The header name and date format looks like:

Code Block
 X-Precise-Last-Modified:Mon, 10 Jan 2011 20:49:10.770 GMT 

Note that the time format is essentially the HTTP time format with ".mmm" appended to the time for milliseconds.  It must be converted to the XML timestamp format to serve as an argument value to "after". By the time it was obvious what a stupid choice this was, it was too late in the development cycle to change easily, but perhaps in the future an XML-format precise timestamp header can be added.
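A sketch of that conversion in Python; it assumes the zone is always GMT, as in the example header, and emits the UTC offset form of the XML timestamp:

```python
from datetime import datetime, timezone

def precise_http_to_xml(value):
    """Convert an X-Precise-Last-Modified value (HTTP date with ".mmm"
    appended to the seconds) into an XML timestamp usable as the
    'after' argument on the next incremental harvest."""
    dt = datetime.strptime(value, "%a, %d %b %Y %H:%M:%S.%f %Z")
    return dt.replace(tzinfo=timezone.utc).isoformat()

assert precise_http_to_xml("Mon, 10 Jan 2011 20:49:10.770 GMT") == \
    "2011-01-10T20:49:10.770000+00:00"
```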

Access control:
Read permission is required on all desired graphs.

Export and Import of Users, Resource Instances (/export, /import)

This service lets you export the users OR data contents of one repository and import them into a different one, or a later instance of the "same" repository. It is intended to migrate and mirror data between different releases of the software and ontology, or different deployments created for testing. Although the same service can handle both "user" and "resource" instances, these are two separate operations and cannot be combined.

Each export operation creates a single file of serialized RDF data. DO NOT MODIFY OR EDIT IT, or the results will be undefined. Especially in the user-export function, there are some statements included ONLY as cues to the import service, so altering or removing them will have unpredictable results.

Import Roles before Users: When importing Users who might have some different Roles from the Roles already existing in your repository, if you wish to preserve those role memberships, import Roles first.

In general, importing data into an existing repository poses some problems of which you need to be aware:

  • If the URI prefix is different in the destination, do you transform the subject URIs so they are resolvable, or keep the original URIs to match the source?
  • How do you want to handle a duplicate instance in the import (i.e. when the same instance already exists in the repository)?
  • Should a resource instance import be allowed to create new named graphs?

URL: /export (GET or POST method)

Args:
format---MIME type of the serialization format to be output. By default, it is chosen by HTTP content negotiation. Must be capable of encoding quads; default is TriG (application/x-trig).
type=(user|resource|transition|role|grant)---which type of data is to be exported, one MUST be chosen and there is no default.
view---(only when type=resource) optionally choose a different view dataset from which to select the graph(s) for export. Mutually exclusive with workspace. Default is the published-resources view. Be careful to choose a view that does not include repository user instances.
workspace---URI of the workspace named graph to take the place of the default graph. Relevant metadata and ontology graphs are included automatically. Mutually exclusive with view.
include=list..---Explicit list of user or resource instances to include in the export. Format is a series of resource URIs separated by commas (",") and optional spaces. Users may also be referenced by username.
exclude=list..---Explicit list of user or resource instances to exclude from the export. Mutually exclusive with include, format is the same as the include list.

Result of an export request is a document of serialized RDF data in the requested format, unless the HTTP status indicates an error. Note that inferred statements are never included. This document is ONLY meaningful to an /import request by the same or another data repository. It is not intended to be human readable or meaningful. Its contents may not necessarily reflect the source repository's content directly.

Access control: Only the superuser may export users. Read access on the resident named graphs is required to export resource instances.

About Export/Import of Access Grants: For most types, the export includes access grants since these are considered administrative metadata. The type of "grant" is included so you can export and import just the grants on any URI, mainly to give you a way to transfer the access grants on Named Graphs (Workspaces). Also see the ignoreACL option on /import.

About Format and Charset of Imported Document: By default, format is indicated by the content arg's Content-Type header, at least when it is a separate entity in a POST entity body of type multipart/form-data. However, this is not always easy to control in an HTTP client. You can override it with the format arg. If a charset parameter is not specified in the content-type value, it defaults to UTF-8. Also note that the format must be capable of encoding quads, e.g. TriG (application/x-trig).

URL: /import (POST method only)

Args:
format - the MIME type (and charset) of the input document's serialization format. This overrides any content-type on the content arg's entity.
type=(user|resource|transition|role|grant) - which type of data is to be imported, one MUST be chosen and there is no default. MUST match the type of the exported data.
content=stream.. - the serialized RDF data to import, must have been generated by export of the same type of data.
transform=(yes|no) - Required, no default. When 'yes', URIs of imported instances are transformed into new, unique URIs resolvable by the importing repository. When 'no', URIs are left as they are.
duplicate=(abort|ignore|replace) - how to handle an import that would result in a duplicate object. Default is abort.
newgraph=(abort|create) - how to handle a resource instance import that would result in creating a new named graph for the data. Must be superuser to choose 'create'. Default is abort.
include=list.. - Explicit list of user or resource instances to include in the import. Format is a series of resource URIs separated by commas (",") and optional spaces. Users may also be referenced by username. IMPORTANT NOTE: The URIs in the include (and exclude) lists are matched against URIs in the imported source file, NOT the repository.
exclude=list.. - Explicit list of user or resource instances to exclude from the import. Mutually exclusive with include, format is the same. Useful for avoiding conflicts e.g. when loading a user export into a repository where the initial admin user already exists.
ignoreACL=(true|false) - Ignore all access control grants on imported objects. The imported objects will not have any specific access grants.

Send Contact Email - /emailContact
This service originated as a very rushed proof-of-concept and has not been rigorously designed, so it does not have a specification of behavior. This section only documents how to call it; the internal actions it takes are still under discussion.

URL: /emailContact (POST method only)
Args:
uri=subject-URI - required
client_ip=IP addr - required
from_name=personal name - required
from_email=mailbox - required
subject=text
message=text
test_mode - default false, true if present.

Sends an email message to the designated contact for the resource identified by uri. The exact rules for determining that contact address have not been formally specified yet. When a contact cannot be determined, the mail is sent to the configured postmaster for the repository. See the Admin Guide for details about configuring that and the rest of the email service.
Workflow Services REST API
Show Transitions
/repository/workflow/transitions
Method: GET or POST
Args:
workspace=URI - restrict results to transitions applying to a given workspace (includes wildcards). Default is to list all.
format=mime - type of result.

Returns the selected workflow transitions as SPARQL tabular results.
Result columns are:

Subject URI
Label
Description
Workspace URI
Workspace Label
initial-state URI
initial-state Label
final-state URI
final-state Label
allowed - boolean literal, true if current user has permission on this transition

Show Resources
/repository/workflow/resources
Method: GET, POST
Args:
state=URI|all - workflow state by which to filter, or 'all' for wildcard.
type=URI - include only instances of this type (inference counts) - default is no restriction.
unclaimed=(true|false) - when true, unclaimed resources are included in the report. Default is true.
owner=(self|all|none) - show, in addition to any selected unclaimed resources:
self - those claimed by current user
all - instances claimed by anyone
none - includes no claimed resources.
Default is self. Note that the combination unclaimed=false and owner=none is illegal.
workspace=URI - restrict results to resources in a given workspace; optional, default is all available workspaces.
format=mime - type of result, one of the SPARQL query formats.
detail=(brief|full) - columns to include in results, see description below.
addPattern=SPARQL-text - add this triple pattern to SPARQL query. May refer to variables as described below. Value is a literal hunk of triple pattern ready to be inserted into the patterns of a SELECT query. It may refer to the result variables mentioned below, e.g. ?r_type.
addResults=query-variable.. - add these variables to results columns. value is expected to be a space-separated series of query variables (which must appear in the triple pattern), e.g. "?lab ?labname"
addModifiers=modifiers.. - add a SPARQL modifiers clause to the query. Modifiers include ORDER BY, LIMIT, and OFFSET and can be used to implement pagination.

This request returns descriptions of a selected set of resources in SPARQL tabular result form. It is intended to be used to populate menus and autocomplete vocabularies, as well as more complex table views.

The brief level of detail includes columns:
r_subject - URI of resource instance
r_label - label of resource instance
r_type - URI of instance's asserted type
The full level of detail adds the fields:
r_created - created date from provenance
r_owner - URI of workflow claimant if any
r_ownerLabel - the rdfs:label of r_owner if it is bound (and has a label)
r_state - URI of workflow state
Any query variables you specified in the addResults list are added to either result.
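To illustrate how addPattern, addResults, and addModifiers combine, here is a sketch of the query assembly. The base pattern shown is a guess for illustration only, not the repository's actual query; the documented result variables (?r_subject, ?r_label, ?r_type) are the anchor points:

```python
def build_query(add_pattern="", add_results="", add_modifiers=""):
    """Assemble a SELECT query the way the /workflow/resources args
    extend it: extra pattern text inside WHERE, extra result
    variables, and a trailing modifiers clause."""
    base_vars = "?r_subject ?r_label ?r_type"
    base_pattern = ("?r_subject rdf:type ?r_type . "
                    "?r_subject rdfs:label ?r_label .")
    return ("SELECT {} {} WHERE {{ {} {} }} {}"
            .format(base_vars, add_results, base_pattern,
                    add_pattern, add_modifiers)).strip()

# Join to a hypothetical lab and paginate, as the args above allow:
q = build_query(
    add_pattern="?r_subject ex:lab ?lab . ?lab rdfs:label ?labname .",
    add_results="?lab ?labname",
    add_modifiers="ORDER BY ?r_label LIMIT 20 OFFSET 40")
assert q.endswith("OFFSET 40")
```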

Claim Resource Instance
/repository/workflow/claim
Method: POST
Args:
uri=URI - subject to claim
user=UserURI - optional, user who asserts the claim; default is authenticated user.

Asserts a claim on the given instance. This requires the following:

user must have access to a transition out of the current state.
user must have read access to instance
there is no existing claim on the instance.

There is no result document. The status indicates success with 200. As side-effects, this action also:

Adds insert and delete access to the instance for the claiming user.
Sets :hasWorkflowOwner property to the claimant (in an internal metadata graph)

Release (claimed) Resource Instance
/repository/workflow/release
Method: POST
Args:
uri=URI - subject to release

Requires that the current user is the owner, or has the Administrator role. As side effects:

Removes insert and delete access to the instance for the current user.
Removes :hasWorkflowOwner property (in an internal metadata graph)

Transition on Resource Instance
/repository/workflow/push
Method: POST
Args:
uri=URI - subject to transition
transition=URI - indicates transition to take

Requires:

read access to instance.
must be claimed
user should be either claim owner or Administrator
user must have READ access on the chosen Transition.

Side-effects include:

Implied release of any current claim (and all applicable side-effects of that)
Executes the action associated with this transition, if any.
Resource's workflow state will become the final state of the transition.

Admin UI Support Services
There are also some REST services created to support the administrative GUI (the JSP pages). They are NOT intended to be part of the regular API since their exact function and parameter structure is dictated by the JSPs and is thus still somewhat fluid. Most of them also require Administrator access.

If you use these services, it is at your own risk, and thus they are not documented. See the JSPs for usage examples if you are really curious. The URLs are as follows:

/repository/admin/updateUser
/repository/admin/updateGrants
/repository/admin/updateRole
/repository/admin/updateTransition
/repository/admin/updateNamedGraph

Site Home Page
Since the repository webapp is installed as the ROOT webapp (so it can have control over the URL space for the purpose of resolving all of its resource instances), it also controls the site's top-level "home" (index) page. The home page is not part of the API, however, so the repository does not need to control the content of that page.

In order to give the site administrator control over the content of the site home page, the repo lets you redirect it to any URL, under the control of a configuration property. See eaglei.repository.index.url in the configuration properties.

Services to Support Admin GUI Pages
These REST services were included to support the administrative GUI, which is implemented as JSPs. They are targeted precisely at serving as the back end of the JSPs, so their range of function is strictly limited and they are effectively "write-only". Typically you call on the service and the HTTP status says whether it succeeded or failed; it does not return a detailed report.

Another feature of these services is that they look at the referring URL, and if it ends in ".jsp", they redirect back to that page upon a successful completion, passing through selected parameters and adding a "message" parameter to display the status of the operation. See comments in the code for details about these parameters.

Access Control: All services except updateUser require superuser privilege. Since a user may update his or her own account (except for roles), that is also allowed by the back-end service.

The services are as follows (required args were marked in red):
Update User Account
/repository/admin/updateUser
Method: POST
Args:
username=text - account name to change; optional only for create
only_password=(boolean) - do not process changes to user metadata, just update password
password - new password
password_confirm - must match new password
old_password - current (old) password, required for non-admin user
first - first name of person (metadata)
last - last name of person (metadata)
mailbox - email address of person
role - URI of role to add to user - may be multiple; when specified, this list of roles replaces the user's current roles.

If current user is not Administrator, there are these restrictions:

Cannot create new users
Can only change your own user account
Must supply old password to change password
Cannot change roles

Update Role
/repository/admin/updateRole
Method: POST
Args:
action= create | update | delete (keyword)
uri=URI - required except for create, identifies the role
label= text - immediate label of role
comment= text - lengthy description of the role