Autonomous Identity 2021.8.7

Stopping and Starting

The following commands are for Linux distributions.

Stopping Docker

  1. Stop Docker. This shuts down all of the containers.

    $ sudo systemctl stop docker
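
To confirm that all containers have stopped, you can list the running containers with the standard Docker CLI; the list should be empty:

    $ sudo docker ps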

Restarting Docker

  1. To restart Docker, first enable it to start on boot using the enable command.

    $ sudo systemctl enable docker
  2. To start Docker, run the start command.

    $ sudo systemctl start docker
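
To confirm that the Docker daemon is running again, you can check its systemd status:

    $ sudo systemctl status docker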

Shutting Down Cassandra

  1. On the deployer node, SSH to the target node.

  2. Check the Cassandra status.

    $ nodetool status

    Datacenter: datacenter1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load       Tokens       Owns (effective)  Host ID                               Rack
    UN  10.128.0.38  1.17 MiB   256          100.0%            d134e7f6-408e-43e5-bf8a-7adff055637a  rack1
  3. To stop Cassandra, find the process ID and run the kill command (see the note after this procedure).

    $ pgrep -u autoid -f cassandra | xargs kill -9
  4. Check the status again. The connection should now be refused.

    $ nodetool status

    nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
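
Note that kill -9 stops Cassandra abruptly. If you want the node to flush its memtables and stop accepting connections cleanly first, you can optionally run Cassandra's standard drain command before killing the process:

    $ nodetool drain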

Restarting Cassandra

  1. On the deployer node, SSH to the target node.

  2. Restart Cassandra. When you see the "No gossip backlog; proceeding" message, press Enter to return to the command prompt.

    $ cassandra
    
    …
    INFO  [main] 2020-11-10 17:22:49,306 Gossiper.java:1670 - Waiting for gossip to settle...
    INFO  [main] 2020-11-10 17:22:57,307 Gossiper.java:1701 - No gossip backlog; proceeding
  3. Check the status of Cassandra. You should see that it is in UN status ("Up" and "Normal").

    $ nodetool status
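
The output should look similar to the status check in the shutdown procedure, with the node back in UN state:

    Datacenter: datacenter1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load       Tokens       Owns (effective)  Host ID                               Rack
    UN  10.128.0.38  1.17 MiB   256          100.0%            d134e7f6-408e-43e5-bf8a-7adff055637a  rack1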

Shutting Down MongoDB

  1. Check the status of MongoDB.

    $ ps -ef | grep mongod
  2. Connect to the Mongo shell.

    $ mongo --tls --tlsCAFile /opt/autoid/mongo/certs/rootCA.pem \
        --tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem \
        --tlsAllowInvalidHostnames --host <ip-address>
    
    MongoDB shell version v4.2.9
    connecting to: mongodb://<ip-address>:27017/?compressors=disabled&gssapiServiceName=mongodb
    2020-10-08T18:46:23.285+0000 W  NETWORK  [js] The server certificate does not match the host name. Hostname: <ip-address> does not match CN: mongonode
    Implicit session: session { "id" : UUID("22c0123-30e3-4dc9-9d16-5ec310e1ew7b") }
    MongoDB server version: 4.2.9
  3. Switch to the admin database.

    > use admin
    
    switched to db admin
  4. Authenticate using the password set in the vault.yml file. A return value of 1 indicates successful authentication.

    > db.auth("root", "Welcome123")
    
    1
  5. Start the shutdown process. The connection errors that follow are expected; they indicate that the server is no longer accepting connections.

    > db.shutdownServer()
    
    2020-10-08T18:47:06.396+0000 I  NETWORK  [js] DBClientConnection failed to receive message from <ip-address>:27017 - SocketException: short read
    server should be down...
    2020-10-08T18:47:06.399+0000 I  NETWORK  [js] trying reconnect to <ip-address>:27017 failed
    2020-10-08T18:47:06.399+0000 I  NETWORK  [js] reconnect <ip-address>:27017 failed
  6. Exit the mongo shell by running quit() at the mongo prompt, or by pressing <Ctrl-C>.

    > quit()
  7. Check the status of MongoDB.

    $ ps -ef | grep mongod
    
    no instance of mongod found
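
As an alternative check, pgrep (used earlier to stop Cassandra) prints nothing when no mongod process is running:

    $ pgrep -u autoid mongod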

Restarting MongoDB

  1. Restart the MongoDB service.

    $ /usr/bin/mongod --config /opt/autoid/mongo/mongo.conf
    
    about to fork child process, waiting until server is ready for connections.
    forked process: 31227
    child process started successfully, parent exiting
  2. Check the status of MongoDB.

    $ ps -ef | grep mongod
    
    autoid    9245     1  0 18:48 ?        00:00:45 /usr/bin/mongod --config /opt/autoid/mongo/mongo.conf
    autoid   22003  6037  0 21:12 pts/1    00:00:00 grep --color=auto mongod
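
To confirm that MongoDB is accepting connections again, you can reconnect with the same mongo command used in the shutdown procedure:

    $ mongo --tls --tlsCAFile /opt/autoid/mongo/certs/rootCA.pem \
        --tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem \
        --tlsAllowInvalidHostnames --host <ip-address>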

Shutting Down Spark

  1. On the deployer node, SSH to the target node.

  2. Check the Spark status at http://localhost:8080 (the Spark master web UI). You should see that it is up and running.

  3. Stop the Spark Master and workers.

    $ /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/sbin/stop-all.sh
    
    localhost: stopping org.apache.spark.deploy.worker.Worker
    stopping org.apache.spark.deploy.master.Master
  4. Check the Spark status again. You should see: Unable to retrieve http://localhost:8080: Connection refused.
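
As an additional check, the JDK's jps tool lists running JVM processes; after a successful shutdown it should show no Master or Worker entries:

    $ jps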

Restarting Spark

  1. On the deployer node, SSH to the target node.

  2. Start the Spark Master and workers. Enter the user password on the target node when prompted.

    $ /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/sbin/start-all.sh
    
    starting org.apache.spark.deploy.master.Master, logging to /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/logs/spark-autoid-org.apache.spark.deploy.master.Master-1.out
    autoid-2 password:
    localhost: starting org.apache.spark.deploy.worker.Worker, logging to /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/logs/spark-autoid-org.apache.spark.deploy.worker.Worker-1.out
  3. Check the Spark status again. You should see that it is up and running.
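
The password prompt appears because start-all.sh reaches each worker host over SSH. To start Spark without the prompt, you can set up key-based SSH authentication for the autoid user with standard OpenSSH tooling, for example:

    $ ssh-copy-id autoid@localhost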

Copyright © 2010-2022 ForgeRock, all rights reserved.