Server Maintenance
Autonomous Identity administrators must conduct various tasks to maintain the service for their users.
The following are basic server maintenance tasks you may need to perform:
Stopping and Starting
You can run the following commands to stop or start Autonomous Identity components:
Docker
-
Stop Docker. This shuts down all of the containers.
$ sudo systemctl stop docker
-
To restart Docker, first set Docker to start on boot using the enable command:
$ sudo systemctl enable docker
-
To start Docker, run the start command:
$ sudo systemctl start docker
-
After restarting Docker, restart the JAS service to ensure the service can write to its logs:
$ docker service update --force jas_jasnode
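To confirm that the containers came back after the restart, you can list the services and check that each reports its full replica count. A quick check using standard Docker commands (jas_jasnode is the service name from the step above):
$ docker service ls
$ docker service ps jas_jasnode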
Cassandra
-
On the deployer node, SSH to the target node.
-
Check Cassandra status.
$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load      Tokens  Owns (effective)  Host ID                               Rack
UN  10.128.0.38  1.17 MiB  256     100.0%            d134e7f6-408e-43e5-bf8a-7adff055637a  rack1
-
To stop Cassandra, find the process ID and run the kill command.
$ pgrep -u autoid -f cassandra | xargs kill -9
-
Check the status again.
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
-
On the deployer node, SSH to the target node.
-
Restart Cassandra. When you see the "No gossip backlog; proceeding" message, press Enter to continue.
$ cassandra
…
INFO [main] 2020-11-10 17:22:49,306 Gossiper.java:1670 - Waiting for gossip to settle…
INFO [main] 2020-11-10 17:22:57,307 Gossiper.java:1701 - No gossip backlog; proceeding
-
Check the status of Cassandra. You should see that it is in the UN ("Up" and "Normal") state.
$ nodetool status
MongoDB
-
Check the status of MongoDB.
$ ps -ef | grep mongod
-
Connect to the Mongo shell.
$ mongo --tls --tlsCAFile /opt/autoid/mongo/certs/rootCA.pem --tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem --tlsAllowInvalidHostnames --host <ip-address>
MongoDB shell version v4.2.9
connecting to: mongodb://<ip-address>:27017/?compressors=disabled&gssapiServiceName=mongodb
2020-10-08T18:46:23.285+0000 W NETWORK [js] The server certificate does not match the hostname. Hostname: <ip-address> does not match CN: mongonode
Implicit session: session { "id" : UUID("22c0123-30e3-4dc9-9d16-5ec310e1ew7b") }
MongoDB server version: 4.2.9
-
Switch to the admin database.
> use admin
switched to db admin
-
Authenticate using the password set in the vault.yml file.
> db.auth("root", "Welcome123")
1
-
Start the shutdown process.
> db.shutdownServer()
2020-10-08T18:47:06.396+0000 I NETWORK [js] DBClientConnection failed to receive message from <ip-address>:27017 - SocketException: short read
server should be down…
2020-10-08T18:47:06.399+0000 I NETWORK [js] trying reconnect to <ip-address>:27017 failed
2020-10-08T18:47:06.399+0000 I NETWORK [js] reconnect <ip-address>:27017 failed
-
Exit the mongo shell with quit() or <Ctrl-C>.
> quit()
-
Check the status of MongoDB.
$ ps -ef | grep mongod
no instance of mongod found
-
Restart the MongoDB service.
$ /usr/bin/mongod --config /opt/autoid/mongo/mongo.conf
about to fork child process, waiting until server is ready for connections.
forked process: 31227
child process started successfully, parent exiting
-
Check the status of MongoDB.
$ ps -ef | grep mongod
autoid    9245     1  0 18:48 ?        00:00:45 /usr/bin/mongod --config /opt/autoid/mongo/mongo.conf
autoid   22003  6037  0 21:12 pts/1    00:00:00 grep --color=auto mongod
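Beyond checking the process list, you can confirm that the restarted server answers queries by sending a ping from a one-off shell invocation. A minimal check, reusing the TLS options from the connection step above:
$ mongo --tls --tlsCAFile /opt/autoid/mongo/certs/rootCA.pem --tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem --tlsAllowInvalidHostnames --host <ip-address> --eval 'db.runCommand({ ping: 1 })'
A response of { "ok" : 1 } indicates the server is accepting connections.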
Apache Spark
-
On the deployer node, SSH to the target node.
-
Check Spark status. You should see that it is up and running.
$ elinks http://localhost:8080
-
Stop the Spark Master and workers.
$ /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/sbin/stop-all.sh
localhost: stopping org.apache.spark.deploy.worker.Worker
stopping org.apache.spark.deploy.master.Master
-
Check the Spark status again. You should see:
Unable to retrieve http://localhost:8080: Connection refused
-
On the deployer node, SSH to the target node.
-
Start the Spark Master and workers. Enter the user password on the target node when prompted.
$ /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/logs/spark-autoid-org.apache.spark.deploy.master.Master-1.out
autoid-2 password:
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/logs/spark-autoid-org.apache.spark.deploy.worker.Worker-1.out
-
Check the Spark status again. You should see that it is up and running.
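If you prefer a scriptable check over a text browser, the standalone Spark master also serves a JSON view of its status on the same port. A quick check, assuming the default UI port of 8080; the response should include the master's state (for example, ALIVE) and its registered workers:
$ curl http://localhost:8080/json/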
Apache Livy
Apache Livy lets you manage Spark cluster contexts using a simple REST interface.
-
Stop Livy.
$ /opt/autoid/apache-livy/apache-livy-080-incubating-SNAPSHOT-bin/bin/livy-server stop
-
Start Livy.
$ /opt/autoid/apache-livy/apache-livy-080-incubating-SNAPSHOT-bin/bin/livy-server start
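To confirm that Livy is back up, you can query its REST interface directly. A quick check, assuming Livy listens on its default port of 8998:
$ curl http://localhost:8998/sessions
Even an empty session list (typically {"from":0,"total":0,"sessions":[]}) confirms the server is answering.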
Accessing Log Files
Autonomous Identity provides different log files to monitor or troubleshoot your system.
Getting Docker Container Information
-
On the target node, get system-wide information about the Docker deployment. The output shows the number of running, paused, and stopped containers, as well as other details about the deployment.
$ docker info
-
If you want to get debug information, use the -D option. This option makes all docker commands output additional debug information.
$ docker -D info
-
Get information on all of the containers on your system.
$ docker ps -a
-
Get information on the docker images on your system.
$ docker images
-
Get docker service information on your system.
$ docker service ls
-
Get the docker logs for a service.
$ docker service logs <service-name>
For example, to see the nginx service:
$ docker service logs nginx_nginx
Other useful arguments:
-
--details. Show extra details.
-
--follow, -f. Follow log output. The command streams new output from STDOUT and STDERR.
-
--no-trunc. Do not truncate output.
-
--tail {n|all}. Show lines from the end of the log, where n is the number of lines or all for all lines.
-
--timestamps, -t. Show timestamps.
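For example, to stream the last 100 lines of the nginx service log with timestamps:
$ docker service logs --follow --timestamps --tail 100 nginx_nginx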
Getting Cassandra Logs
Apache Cassandra begins writing its output log at startup. Autonomous Identity pipes the output to log files in the /opt/autoid/ directory.
-
On the target node, get the log file for the Cassandra install.
$ cat /opt/autoid/cassandra/installcassandra.log
-
Get startup information. Cassandra writes to cassandra.out at startup.
$ cat /opt/autoid/cassandra.out
-
Get the general Cassandra log file.
$ cat /opt/autoid/apache-cassandra-3.11.2/logs/system.log
By default, the log level is set to INFO. You can change the log level by editing the /opt/autoid/apache-cassandra-3.11.2/conf/logback.xml file (see the sample edit after these steps). Changes take effect immediately; no restart is necessary. The log levels from most to least verbose are as follows:
-
TRACE
-
DEBUG
-
INFO
-
WARN
-
ERROR
-
FATAL
-
Get the JVM garbage collector logs.
$ cat /opt/autoid/apache-cassandra-3.11.2/logs/gc.log.<number>.current
For example:
$ cat /opt/autoid/apache-cassandra-3.11.2/logs/gc.log.0.current
The output is configured in the /opt/autoid/apache-cassandra-3.11.2/conf/cassandra-env.sh file. Add the following JVM properties to enable them:
-
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
-
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
-
JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
-
JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
-
Get the debug log.
$ cat /opt/autoid/apache-cassandra-3.11.2/logs/debug.log
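As a sample of the log-level edit mentioned above, this is roughly what the root logger element in logback.xml looks like when raised to DEBUG; the appender names in your copy of the file may differ:
<!-- In /opt/autoid/apache-cassandra-3.11.2/conf/logback.xml; set level to TRACE, DEBUG, INFO, WARN, ERROR, or FATAL -->
<root level="DEBUG">
    <appender-ref ref="SYSTEMLOG" />
    <appender-ref ref="STDOUT" />
</root>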
Other Useful Cassandra Monitoring Tools and Files
Apache Cassandra has other useful monitoring tools that you can use to observe or diagnose an issue. To see the complete list of options, see the Apache Cassandra documentation.
-
View statistics for a cluster, such as IP address, load, and number of tokens.
$ /opt/autoid/apache-cassandra-3.11.2/bin/nodetool status
-
View statistics for a node, such as uptime, load, key cache hit rate, and other information.
$ /opt/autoid/apache-cassandra-3.11.2/bin/nodetool info
-
View the Cassandra configuration file to determine how properties are pre-set.
$ cat /opt/autoid/apache-cassandra-3.11.2/conf/cassandra.yaml
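Two other standard nodetool subcommands that are often useful when diagnosing a node, for thread pool activity and compaction progress respectively:
$ /opt/autoid/apache-cassandra-3.11.2/bin/nodetool tpstats
$ /opt/autoid/apache-cassandra-3.11.2/bin/nodetool compactionstats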
Apache Spark Logs
Apache Spark provides several ways to monitor the server after an analytics run.
-
To get an overall status of the Spark server, point your browser to http://<spark-master-ip>:8080.
-
Print the log messages sent to the output file during an analytics run.
$ cat /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/logs/<file-name>
For example:
$ cat /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/logs/spark-org.apache.spark.deploy.master.Master-1-autonomous-id-test.out
-
Print the data logs that were written during an analytics run.
$ cat /data/log/files/<filename>
For example:
$ cat /data/log/files/f6c0870e-5782-441e-b145-b0e662f05f79.log
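To watch either log while an analytics run is still in progress, follow it instead of printing it once:
$ tail -f /data/log/files/<filename>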
Updating IDP Certificates
When an identity provider (IDP) changes its certificates, you can update the certificates in Autonomous Identity.
-
SSH to the target machine.
-
View the current certificate settings in the docker-compose.yml file, and locate the NODE_EXTRA_CA_CERTS property.
$ vi /opt/autoid/res/api/docker-compose.yml
NODE_EXTRA_CA_CERTS=/opt/app/cert/<customer ID>-sso.pem
-
Access your IDP URL, and export the .cer file from the browser.
-
Convert the .cer file into a .pem file using the following command:
$ openssl x509 -in certificatename.cer -outform PEM -out certificatename.pem
-
Open the docker-compose.yml file and update the NODE_EXTRA_CA_CERTS property.
$ vi /opt/autoid/res/api/docker-compose.yml
NODE_EXTRA_CA_CERTS=/opt/app/cert/<new-cert>.pem
-
Restart the Autonomous Identity services.
$ docker stack rm api
$ docker stack deploy --with-registry-auth --compose-file /opt/autoid/res/api/docker-compose.yml api
$ docker service update --force ui_zoran-ui
$ docker service update --force nginx_nginx
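To confirm that the converted certificate parses correctly and has not expired, you can inspect the PEM file with openssl; a quick check using the file name from the conversion step:
$ openssl x509 -in certificatename.pem -noout -subject -issuer -dates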
Changing the Cassandra zoran_dba and zoran_user passwords
During deployment, Autonomous Identity creates two user accounts to interact with the Cassandra database: zoran_dba and zoran_user. The zoran_dba is an administrator or superuser account used by Autonomous Identity to set up the Cassandra database. The zoran_user is a non-admin account used to log in to the Cassandra command-line interface, cqlsh.
You can change the passwords after deploying Autonomous Identity using cqlsh.
-
Access cqlsh.
$ cqlsh -u zoran_dba -p admin_password
Connected to Zoran Cluster at <server-ip>:9042.
[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
zoran_dba@cqlsh>
-
In cqlsh, change the zoran_dba password:
zoran_dba@cqlsh> ALTER USER zoran_dba WITH PASSWORD 'new_admin_password';
zoran_dba@cqlsh> exit
-
Use a text editor and change the environment variable in the /opt/autoid/res/jas/docker-compose.yml file:
- CASSANDRA_DB_PASSWORD=new_admin_password
-
Remove the running container and redeploy it:
$ docker stack rm jas
$ docker stack deploy --with-registry-auth --compose-file /opt/autoid/res/jas/docker-compose.yml jas
-
Update the zoran_user password:
zoran_dba@cqlsh> ALTER USER zoran_user WITH PASSWORD 'new_user_password';
zoran_dba@cqlsh> exit
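To verify the changes, log in to cqlsh with each account and its new password; a successful prompt confirms the update:
$ cqlsh -u zoran_dba -p new_admin_password
$ cqlsh -u zoran_user -p new_user_password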
Change MongoDB password post-deployment
You can update the MongoDB password by running the following steps on a running instance of MongoDB.
Update the host IP and current root password parameters as they pertain to your environment.
-
Open the MongoDB shell. Use your host IP and root password:
mongo admin --host 10.10.10.10 --tls \
    --tlsCertificateKeyFile /opt/autoid/certs/mongo/mongodb.pem \
    --tlsCAFile /opt/autoid/certs/mongo/rootCA.pem \
    --tlsAllowInvalidHostnames \
    --username root \
    --password 'current_root_password'
-
On the MongoDB shell, run the changeUserPassword command:
db.changeUserPassword("mongoadmin", "new_password")
-
Update the password as an environment variable in the JAS service. Update the following variable in the /opt/autoid/res/jas/docker-compose.yml file:
- MONGO_ROOT_PASSWORD=new_password
-
Delete the currently running JAS container and redeploy:
docker stack rm jas
docker stack deploy \
    --with-registry-auth \
    --compose-file /opt/autoid/res/jas/docker-compose.yml jas
-
Check that there are no stack errors in the container logs. The logs should show successful connections to MongoDB:
2022-11-21 19:07:40,257 INFO c.m.d.l.SLF4JLogger [cluster-ClusterId{value='637bcc764cb8670d06c2feb8',description='null'}-10.10.10.:27017] Opened connection [connectionId{localValue:2, serverValue:30}] to 10.10.10.10.:27017
2022-11-21 19:07:40,257 INFO c.m.d.l.SLF4JLogger [cluster-rtt-ClusterId{value='637bcc764cb8670d06c2feb8',description='null'}-10.10.10.:27017] Opened connection [connectionId{localValue:1, serverValue:31}] to 10.10.10.10.:27017
2022-11-21 19:07:40,257 INFO c.m.d.l.SLF4JLogger [cluster-rtt-ClusterId{value='637bcc764cb8670d06c2feb8',description='null'}-10.10.10.:27017] Monitor thread successfully connected to server with description ServerDescription{address=10.10.10.10:27017, type=STANDALONE, State=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=9, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=221098137}
2022-11-21 19:07:45,383 INFO c.m.d.l.SLF4JLogger [main] Opened connection [connectionId{localValue:3, serverValue:32}] to 10.10.10.10.:27017
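As a final check, you can reconnect as the user whose password you changed, using the same shell invocation as in the first step with a ping appended; { "ok" : 1 } in the output confirms the new credentials work:
mongo admin --host 10.10.10.10 --tls \
    --tlsCertificateKeyFile /opt/autoid/certs/mongo/mongodb.pem \
    --tlsCAFile /opt/autoid/certs/mongo/rootCA.pem \
    --tlsAllowInvalidHostnames \
    --username mongoadmin \
    --password 'new_password' \
    --eval 'db.runCommand({ ping: 1 })'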