Stopping and Starting
The following commands are for Linux distributions.
Stopping Docker
Stop Docker. This shuts down all of the containers.
$ sudo systemctl stop docker
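To confirm that the Docker service has stopped, you can query its state with systemctl; it should report inactive. (Optional check; the exact wording can vary by distribution.)
$ sudo systemctl is-active docker
inactive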
Re-Starting Docker
To restart Docker, first set Docker to start on boot using the enable command.
$ sudo systemctl enable docker
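If you want to confirm that Docker is now set to start on boot, you can query the unit's enablement state. (Optional check.)
$ sudo systemctl is-enabled docker
enabled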
To start Docker, run the start command.
$ sudo systemctl start docker
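To confirm that the daemon is back up and that the containers have restarted, you can check the service state and list the running containers. (Optional checks; which containers reappear depends on their restart policies.)
$ sudo systemctl is-active docker
active
$ sudo docker ps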
Shutting Down Cassandra
From the deployer node, SSH to the target node.
Check the Cassandra status.
$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load      Tokens  Owns (effective)  Host ID                               Rack
UN  10.128.0.38  1.17 MiB  256     100.0%            d134e7f6-408e-43e5-bf8a-7adff055637a  rack1
To stop Cassandra, find the process ID and run the kill command.
$ pgrep -u autoid -f cassandra | xargs kill -9
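To confirm that the process has exited, you can repeat the same pgrep search; it should print nothing once Cassandra is down. (Optional check.)
$ pgrep -u autoid -f cassandra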
Check the status again.
$ nodetool status
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
Re-Starting Cassandra
From the deployer node, SSH to the target node.
Restart Cassandra. When you see the "No gossip backlog; proceeding" message, press Enter to continue.
$ cassandra
...
INFO  [main] 2020-11-10 17:22:49,306 Gossiper.java:1670 - Waiting for gossip to settle...
INFO  [main] 2020-11-10 17:22:57,307 Gossiper.java:1701 - No gossip backlog; proceeding
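If you would rather not leave Cassandra's startup logging attached to your terminal, one option is to start it with nohup and send the output to a file. This is a sketch that assumes the cassandra binary is on the PATH of the autoid user and that ~/cassandra.log is an acceptable location for the startup log.
$ nohup cassandra > ~/cassandra.log 2>&1 &
You can then follow the startup with tail -f ~/cassandra.log and watch for the "No gossip backlog; proceeding" line.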
Check the status of Cassandra. You should see that it is in UN status ("Up" and "Normal").
$ nodetool status
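If you would rather wait for the node to reach UN automatically instead of re-running the command by hand, a small shell loop like the following sketch can poll the status until the node reports Up/Normal (this assumes nodetool is on the PATH of the user running it):
# Poll nodetool status until this node reports Up/Normal (UN).
until nodetool status 2>/dev/null | grep -q '^UN'; do
  echo "Waiting for Cassandra to report UN..."
  sleep 5
done
echo "Cassandra is up and normal."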
Shutting Down Spark
From the deployer node, SSH to the target node.
Check the Spark status. You should see that it is up and running.
$ elinks http://localhost:8080
Stop the Spark Master and workers.
$ /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/sbin/stop-all.sh
localhost: stopping org.apache.spark.deploy.worker.Worker
stopping org.apache.spark.deploy.master.Master
Check the Spark status again. You should see the following:
Unable to retrieve http://localhost:8080: Connection refused
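If elinks is not installed on the node, curl can perform an equivalent check; with -sf it exits non-zero and prints nothing when the Spark UI is unreachable. (Optional alternative, assuming curl is available.)
$ curl -sf http://localhost:8080 > /dev/null && echo "Spark UI is up" || echo "Spark UI is down"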
Re-Starting Spark
From the deployer node, SSH to the target node.
Start the Spark Master and workers. Enter the user password on the target node when prompted.
$ /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/logs/spark-autoid-org.apache.spark.deploy.master.Master-1.out
autoid-2 password:
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/logs/spark-autoid-org.apache.spark.deploy.worker.Worker-1.out
Check the Spark status again. You should see that it is up and running.
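If you would rather not be prompted for the user password each time start-all.sh or stop-all.sh runs, you can set up key-based SSH from the target node to itself. This is an optional convenience; the sketch below assumes the autoid user and the default key paths.
$ ssh-keygen -t rsa
$ ssh-copy-id autoid@localhost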