# Test-case

Pre-requisites:

- Make sure Kafka and lh-consumer are in the same namespace.
- Make sure the Kafka, ElasticSearch, and log-consumer pods are in the Running state.

STEP 1: Create an index for each environment in ElasticSearch

    $ kubectl exec -it -n logharbour pod/lh1-es-single-node-0 -- /bin/bash
    $ curl -X PUT "http://localhost:9200/kra-dev"
    $ curl -X PUT "http://localhost:9200/kra-qa"
    $ curl -X PUT "http://localhost:9200/kra-uat"

(A sketch for verifying that the indices were created is given at the end of this document.)

STEP 2: Create a data view in Kibana

- Log in to Kibana at "http://13.201.78.242:30005/".
- Create a data view for each environment in Kibana.

STEP 3: Run the log-producer

Make the necessary changes in the lh-producer.yaml file, then run the log-producer with the command below. It generates random logs and pushes them to the log-consumer through the Kafka broker; the same logs can then be verified with curl (STEP 4) and in the Kibana data views (STEP 5). A sketch for checking the producer pod is given at the end of this document.

    $ kubectl apply -f lh-producer.yaml -n logharbour

STEP 4: Check that the logs are stored in ElasticSearch

    $ kubectl exec -it -n logharbour pod/lh1-es-single-node-0 -- /bin/bash
    $ curl -X GET "http://localhost:9200/<index-name>/_count" -H 'Content-Type: application/json'

Replace `<index-name>` with one of the indices created in STEP 1 (kra-dev, kra-qa, or kra-uat). A sketch of the expected output is given at the end of this document.

STEP 5: Check the logs in Kibana

- Log in to Kibana at "http://13.201.78.242:30005/" using your credentials.
- Check the logs for each environment on the 'Discover' tab.
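
Verification sketch for STEP 1: after creating the indices, they can be listed with the standard ElasticSearch `_cat/indices` API, run from inside the ES pod shell opened above.

    $ curl -X GET "http://localhost:9200/_cat/indices/kra-*?v"

If creation succeeded, this lists kra-dev, kra-qa, and kra-uat along with their health, document count, and size.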
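
Verification sketch for STEP 3: a minimal check that the producer started, assuming it runs as a pod in the logharbour namespace. The exact pod or job name depends on lh-producer.yaml and is not shown here; `<lh-producer-pod-name>` below is a placeholder.

    $ kubectl get pods -n logharbour
    $ kubectl logs -n logharbour <lh-producer-pod-name>

If the producer logs what it sends, the output should show the randomly generated log entries being pushed to the Kafka broker.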
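
Expected output sketch for STEP 4: the count value below is illustrative; the response shape is the standard ElasticSearch `_count` response.

    $ curl -X GET "http://localhost:9200/kra-dev/_count" -H 'Content-Type: application/json'
    {"count":120,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0}}

A non-zero count for the environment index (or indices) the producer targets confirms that the logs reached ElasticSearch; the same documents should then be visible in the corresponding Kibana data view in STEP 5.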