In this short blog post I will show you how to upload the log files of any service, be it Apache, JBoss, nginx, or anything else running on your EC2 instance, to S3.

If you ask me what the use of all this is, I will say there are many advantages. One of them: suppose you have a service running on EC2 and, for some reason, you want to run it on a new EC2 instance and terminate the old one. The big drawback of doing that is that you lose your service's log files, which you might want for troubleshooting and analytics. Uploading the logs to S3 first means they survive the instance.

Before carrying out the tasks below, make sure that aws-cli is installed and configured on your system, and that gzip is installed so the logs can be compressed before being sent to S3.
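On Ubuntu, getting both into place looks something like this (package names assumed from Ubuntu's default repositories; aws configure will prompt for an access key, secret key, and default region):

sudo apt-get update
sudo apt-get install -y awscli gzip
aws configure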

For the sake of showing how this works I have used Apache running on an Ubuntu instance. You can do the same with any service, based on your requirements.

The simple approach I will show you uses the logrotate utility, which comes by default on Linux-based systems. Now you might be asking: what is logrotate? For those who don't know about it, here's a simple explanation.

Logrotate is a utility designed for administrators who manage servers producing a high volume of log files. It helps save disk space and avoids the risk of a system becoming unresponsive because the disk has filled up. Logrotate gives a system administrator the ability to systematically rotate and archive any log files produced by the system, reducing the operating system's disk space requirements. By default, logrotate is invoked once a day by a cron job in /etc/cron.daily/.
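You can see this for yourself on an Ubuntu instance (exact paths may vary by distribution):

# The daily cron job that triggers rotation
cat /etc/cron.daily/logrotate

# Per-service rotation rules live here
ls /etc/logrotate.d/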

To learn more about logrotate, see its man page (man logrotate).

Creating a logrotate hook is very simple. Just add a postrotate/endscript block containing the script or commands you want to run. You'll also want to add sharedscripts to make sure the script is executed only once per rotation, rather than once per matched log file.

The logrotate file (/etc/logrotate.d/apache2) for my service looks like this:

/var/log/apache2/*.log {
    daily             # rotate once per day
    missingok         # don't error if a log file is missing
    copytruncate      # copy the log, then truncate the original in place
    rotate 10         # keep the last 10 rotated logs
    compress          # gzip rotated logs...
    delaycompress     # ...but leave the most recent rotation uncompressed
    notifempty        # skip rotation if the log is empty
    sharedscripts     # run postrotate once for all matched logs
    postrotate
        /bin/bash /etc/apache2/upload_log_to_s3.sh log.1
    endscript
}

Finally, I just needed the script (/etc/apache2/upload_log_to_s3.sh) that uploads a log file to S3:

#!/bin/bash
# Extension of the rotated logs to pick up (passed in from postrotate, e.g. "log.1")
log_file_ext=$1
# Compress the matching logs into a single archive
gzip -c /var/log/apache2/*."$log_file_ext" > /tmp/log.gz
# Upload the archive, named with the current UTC timestamp
aws s3 cp /tmp/log.gz "s3://apache2/logs/$(date -u +%Y-%m-%dT%H:%M:%SZ).log.gz"

Be sure to change the S3 bucket name to your own, since bucket names must be globally unique.

The upload script just gzips the log file (needed because I'm using delaycompress, so the most recent rotated file is still uncompressed), names the uploaded object with the current timestamp, and uploads it using aws-cli. The argument sets the file extension of the log file, which makes it possible to upload both the current (.log) and the previous (.log.1) log files.
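To test the whole pipeline without waiting for the daily cron run, you can invoke logrotate by hand; both of these are standard logrotate flags:

# Dry run: show what would happen without touching any files
sudo logrotate --debug /etc/logrotate.d/apache2

# Force a real rotation, which also fires the postrotate upload hook
sudo logrotate --force /etc/logrotate.d/apache2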

Once rotations start running, timestamped .log.gz objects will accumulate in the bucket.
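One quick way to check (using the same bucket and prefix as the upload script above):

aws s3 ls s3://apache2/logs/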

Hope this article was helpful. If you have any questions or doubts, feel free to ask in the comments section below.
