
# Docker syslog stdout stderr driver

With Docker there was not supposed to be a need to store logs in files. We should output information to stdout/stderr and the rest will be taken care of by Docker itself. When we need to inspect logs, all we are supposed to do is run docker logs. With Docker and the ever more popular use of microservices, the number of deployed containers is increasing rapidly. Monitoring a few or even ten containers individually is not hard, but when that number starts moving towards tens or hundreds, individual logging is impractical at best; monitoring logs for each container separately quickly becomes a nightmare. If we add distributed services, the situation gets even worse: not only do we have many containers, but they are distributed across many servers.

The solution is to use some kind of centralized logging. Our favourite combination is the ELK stack (ElasticSearch, LogStash and Kibana). However, centralized logging with Docker at large scale was not a trivial thing to do (until version 1.6 was released). We had a couple of solutions, but none of them seemed good enough. We could expose the container directory with logs as a volume. From there on, we could tell LogStash to monitor that directory and send log entries to ElasticSearch. As an alternative, we could use LogStash Forwarder and save a bit on server resources. However, the real problem was that there was not supposed to be a need to store logs in files: with Docker we should output logs only to stdout and the rest was supposed to be taken care of. Besides, exposing volumes is one of my least favourite things to do with Docker. Without any exposed volumes, containers are much easier to reason about and to move between servers. This is especially true when a set of servers is treated as a data center and containers are deployed with orchestration tools like Docker Swarm or Kubernetes.

There were other options, but they were all hacks, difficult to set up, or resource-hungry solutions. In version 1.6 Docker introduced the logging driver feature. While it passed mostly unnoticed, it is a very cool capability and a huge step forward in creating a comprehensive approach to logging in Docker environments. In addition to the default json-file driver that allows us to see logs with the docker logs command, we now have the choice to use syslog as an alternative. If set, it will route container output (stdout and stderr) to syslog. As a third option, it is also possible to completely suppress the writing of container output to file. That might save some disk usage when that is of importance, but in most cases it is something hardly anyone will need.

In this post we'll concentrate on syslog and ways to centralize all logs in a single ELK instance. We'll set up the ELK stack, use the Docker syslog log driver and, finally, send all log entries to a central location with rsyslog. Both syslog and rsyslog are pre-installed on almost all Linux distributions.
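As a quick sketch of the three logging choices mentioned above, the driver is selected per container with the `--log-driver` flag on `docker run` (the `alpine` image and container names here are just placeholders):

```shell
# Default json-file driver: output is stored as JSON on the host
# and is available through `docker logs`.
docker run --name json_test alpine echo "hello"
docker logs json_test

# syslog driver: stdout/stderr are routed to the host's syslog
# daemon; note that `docker logs` no longer works for this container.
docker run --log-driver=syslog --name syslog_test alpine echo "hello"

# none driver: container output is suppressed entirely.
docker run --log-driver=none --name none_test alpine echo "hello"
```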

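To give a taste of the rsyslog part, forwarding everything to a central host takes a single rule. The file name and address below are hypothetical; `@` forwards over UDP and `@@` over TCP:

```
# /etc/rsyslog.d/10-central.conf (hypothetical file name)
# Forward all facilities and priorities to the central ELK host.
*.* @10.0.0.10:514

# Use @@ instead of @ to forward over TCP:
# *.* @@10.0.0.10:514
```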