
[Startup MVP recipes #16] Nest.js Get Started with Elasticsearch – ELK Dev instance deployment and integration in Nest.js

Quick Intro

This is a getting-started, single-node deployment of the ELK stack, intended for development use only.

Target: an AWS EC2 t3.medium dev instance

The Docker Compose setup is based on the tls branch of https://github.com/deviantony/docker-elk/tree/tls

Setup

EC2

  • Set up the EC2 instance from the AWS console
  • A t3.medium is sufficient for a dev instance
  • Configure the Security Group per the docker-elk README (expose the corresponding ports)
  • Set up the VPC, an Elastic IP, and DNS

Docker

  • Run docker-compose up --build -d
  • Regenerate the TLS certificates following https://github.com/deviantony/docker-elk/blob/tls/tls/README.md
  • Restart the containers and set up authentication following the README. Tip: pass --url https://elasticsearch:9200 to avoid the SSL validation error you get when connecting by IP.
  • Set the generated passwords in .env and consider hiding that file from git
  • Log in to Kibana with the elastic user, not Kibana’s internal user

Now everything besides Kibana is secured over TLS. For Kibana we don’t want a self-signed certificate, since it is accessed from users’ browsers (Chrome would warn on it).

The idea is to put Nginx in front: redirect 80 → 443, forward 443 → 5601, and finally close port 5601 in the AWS security group (firewall).
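A minimal Nginx server block for that flow could look like the sketch below. The domain and certificate paths are assumptions (Let’s Encrypt paths shown as an example); adjust them to your setup.

```nginx
# Redirect plain HTTP to HTTPS.
server {
    listen 80;
    server_name ourdomain.com;
    return 301 https://$host$request_uri;
}

# Terminate TLS and forward to Kibana on localhost:5601.
server {
    listen 443 ssl;
    server_name ourdomain.com;

    # Example paths for a Let's Encrypt certificate.
    ssl_certificate     /etc/letsencrypt/live/ourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With this in place, port 5601 can be removed from the security group and Kibana is only reachable through the proxy.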


Docker Compose Changes

Config changes on top of original docker-elk repo:

version: "3.7"

services:
  # The 'setup' service runs a one-off script which initializes the
  # 'logstash_internal' and 'kibana_system' users inside Elasticsearch with the
  # values of the passwords defined in the '.env' file.
  #
  # This task is only performed during the *initial* startup of the stack. On all
  # subsequent runs, the service simply returns immediately, without performing
  # any modification to existing users.
  setup:
    container_name: setup
    build:
      context: setup/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    init: true
    volumes:
      - setup:/state:Z
      # (!) CA certificate. Generate using instructions from tls/README.md
      - ./tls/kibana/elasticsearch-ca.pem:/elasticsearch-ca.pem:ro,z
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch

  elasticsearch:
    restart: always
    container_name: elasticsearch
    build:
      context: elasticsearch/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro,z
      - elasticsearch:/usr/share/elasticsearch/data:z
      # (!) TLS certificates. Generate using instructions from tls/README.md.
      - ./tls/elasticsearch/elasticsearch.p12:/usr/share/elasticsearch/config/elasticsearch.p12:ro,z
      - ./tls/elasticsearch/http.p12:/usr/share/elasticsearch/config/http.p12:ro,z
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: -Xms1g -Xmx1g
      # Bootstrap password.
      # Used to initialize the keystore during the initial startup of
      # Elasticsearch. Ignored on subsequent runs.
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk

  logstash:
    restart: always
    container_name: logstash
    build:
      context: logstash/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro,Z
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro,Z
      # (!) CA certificate. Generate using instructions from tls/README.md
      - ./tls/kibana/elasticsearch-ca.pem:/usr/share/logstash/config/elasticsearch-ca.pem:ro,z
    ports:
      - "5044:5044"
      - "50000:50000/tcp"
      - "50000:50000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: -Xms512m -Xmx512m
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    restart: always
    container_name: kibana
    build:
      context: kibana/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro,Z
      # (!) TLS certificates. Generate using instructions from tls/README.md.
      - ./tls/kibana/elasticsearch-ca.pem:/usr/share/kibana/config/elasticsearch-ca.pem:ro,z
      - ./tls/kibana/instance.crt:/usr/share/kibana/config/instance.crt:ro,z
      - ./tls/kibana/instance.key:/usr/share/kibana/config/instance.key:ro,z
    ports:
      - "5601:5601"
    environment:
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
      SERVER_PUBLICBASEURL: https://ourdomain.com
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  setup:
  elasticsearch:

Nest.js Connection

Connect the Nest.js Elasticsearch client to this dev instance:

Convert ca.p12 to PEM:
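The conversion can be done with openssl. The exact filenames depend on how the certificates were generated, so treat these as an example:

```shell
# Extract the certificate(s) from the PKCS#12 bundle as PEM.
# Filenames are assumptions; match them to your tls/ output.
# openssl will prompt for the bundle password if one is set.
openssl pkcs12 -in ca.p12 -nokeys -out ca.crt.pem
```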

Convert the PEM to base64 and save it in an env var:

  • cat ca.crt.pem | base64

To load it in Nest.js, use the following Elasticsearch client config:

{
  node: process.env.ELASTIC_URL,
  tls: {
    // CA certificate, decoded from the base64 env var set above
    ca: Buffer.from(process.env.ELASTIC_CA, 'base64'),
  },
  auth: {
    username: process.env.ELASTIC_USERNAME,
    password: process.env.ELASTIC_PASSWORD,
  },
}
