The syslog-ng OSE application uses a regular expression to detect credit card numbers, and provides two ways to process them:

- `credit-card-mask(value("<message-field-to-process>"))` — Process the specified message field (by default, ${MESSAGE}) and replace the 7th–12th characters of any credit card number (Primary Account Number, or PAN) with asterisks (*). For example, syslog-ng OSE replaces the number 5542043004559005 with 554204******9005.
- `credit-card-hash(value("<message-field-to-process>"))` — Process the specified message field (by default, ${MESSAGE}) and replace any credit card number (Primary Account Number, or PAN) with its 16-character-long SHA-1 hash.

**Usage:**

```
@include "scl/rewrite/cc-mask.conf"

rewrite {
    credit-card-mask(value("<message-field-to-process>"));
};
```

By default, these rewrite rules process the MESSAGE part of the log message.
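The hash variant is wired up the same way. A minimal sketch (assuming the same `scl/rewrite/cc-mask.conf` include also provides the hash rule; the rule, source, and destination names are placeholders):

```
@include "scl/rewrite/cc-mask.conf"

# Replace any detected PAN with its 16-character SHA-1 hash
rewrite r_cc_hash {
    credit-card-hash(value("MESSAGE"));
};

# The rewrite only takes effect once it is referenced in a log path
log {
    source(s_src);
    rewrite(r_cc_hash);
    destination(d_logmatic);
};
```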
Posted by Pierre Guceski 3 years ago
If you want to centralize all your logs from several **originator servers** to a **collector server** before sending them to [logmatic.io](http://logmatic.io/), here is what you need to do:

## On the **Collector server**

Add an input module to your collector server's *nxlog.conf* file to listen for TCP connections on a chosen port:

```
<Input input_collector>
    Module im_tcp
    Host 0.0.0.0
    Port <your_chosen_port>
</Input>
```

**The Host field:** This specifies the IP address or DNS hostname on which the module should listen **to accept connections**. For security reasons, the default listen address is localhost if this directive is not specified (the localhost loopback address is not accessible from the outside). You will most probably want to receive logs from remote hosts, so make sure that the address specified here is accessible. **The any-address 0.0.0.0 is commonly used here.**

Then edit the Route block of your collector server's *nxlog.conf* file to forward all your log entries to logmatic.io:

```
############ ROUTES TO CHOOSE #####
<Route 1>
    Path syslog, input_collector => out
</Route>
```

## On the **Originator server**

Change the output module of your originator's *nxlog.conf* file to:

```
<Output out>
    Module om_tcp
    Host <your_collector_server>
    Port <your_chosen_port>
</Output>
```

**The Host field:** This specifies the IP address or DNS hostname to which the module should send the log entries.
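The originator also needs a Route block wiring an input to the `out` block above. A minimal sketch, assuming a file input (the input name, file path, and route name are illustrative):

```
# Follow a local application log file (path is an example)
<Input in>
    Module im_file
    File "/var/log/myapp/app.log"
</Input>

# Ship everything read from "in" to the collector via the om_tcp output
<Route to_collector>
    Path in => out
</Route>
```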
Posted by Pierre Guceski 3 years ago
To replace a part of the log message with syslog-ng, you have to:

- Define a string or regular expression that finds the text to replace.
- Define a string to replace the original text (macros work as well).
- Select the field of the message that the rewrite rule should process. You can rewrite the structured-data fields of messages complying with the [RFC 5424](https://tools.ietf.org/html/rfc5424) message format.

**Substitution rules use the following syntax:**

```
rewrite <name_of_the_rule> {
    subst("<string or regular expression to find>",
          "<replacement string>",
          value(<field name>),
          type(),
          flags()
    );
};
```

The `type()` and `flags()` options are optional:

- `type()` specifies the type of regular expression to use.
- `flags()` are the [flags](http://doc.logmatic.io/discuss/568cd56313c5ad0d00b34ea3) of the regular expressions.

**The following example replaces every occurrence of the string IP in the text of the message with the string IP-Address:**

```
rewrite r_rewrite_subst {
    subst("IP", "IP-Address", value("MESSAGE"), flags("global"));
};
```

A single substitution rule can include multiple substitutions that are applied sequentially to the message. Note that rewrite rules must be included in the log statement to have any effect.

**The following rules first replace the first occurrence of the string IP with the string IP-Address, and then the string Address with Addresses, yielding IP-Addresses:**

```
rewrite r_rewrite_subst {
    subst("IP", "IP-Address", value("MESSAGE"));
    subst("Address", "Addresses", value("MESSAGE"));
};
```
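The `type()` option becomes useful when the pattern needs capture groups. A sketch that masks the last octet of IPv4 addresses, assuming PCRE-style `$1` backreferences in the replacement string (the rule name and pattern are illustrative; note that backslashes must be doubled inside double-quoted syslog-ng strings):

```
rewrite r_mask_ipv4 {
    # Keep the first three octets captured by the group, mask the last one
    subst("([0-9]+\\.[0-9]+\\.[0-9]+)\\.[0-9]+",
          "$1.xxx",
          value("MESSAGE"),
          type("pcre"),
          flags("global"));
};
```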
Posted by Pierre Guceski 3 years ago
To influence the rate limiting, we basically have two options:

```
$SystemLogRateLimitInterval [number]
$SystemLogRateLimitBurst [number]
```

`$SystemLogRateLimitInterval` determines the amount of time that is measured for rate limiting. By default this is set to 5 seconds. `$SystemLogRateLimitBurst` defines the number of messages that have to occur within the `$SystemLogRateLimitInterval` window to trigger rate limiting. Here, the default is 200 messages.

To change these settings, open the rsyslog configuration:

```
vi /etc/rsyslog.conf
```

Then search for the right spot for the entries; find the following:

```
$ModLoad imuxsock.so
```

Now insert two new lines under the ModLoad command and fill them as follows:

```
$SystemLogRateLimitInterval 2
$SystemLogRateLimitBurst 50
```

In plain words, this means that rate limiting will take effect if more than **50 messages** occur in **2 seconds**.
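Conversely, if a chatty but legitimate application keeps tripping the limiter, you can switch rate limiting off entirely: rsyslog treats an interval of 0 as "no rate limiting". A sketch of that variant:

```
$ModLoad imuxsock.so
# An interval of 0 disables imuxsock rate limiting altogether
$SystemLogRateLimitInterval 0
```

Either way, restart rsyslog (`sudo service rsyslog restart`) so the new values are picked up.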
Posted by Pierre Guceski 3 years ago
You can add more context by using syslog's structured data, which is natively understood by **Logmatic.io**. To do so, create a [conditional rewrite](https://www.balabit.com/sites/default/files/documents/syslog-ng-ose-latest-guides/en/syslog-ng-ose-guide-admin/html/conditional-rewrite.html) operator as illustrated below:

```
...
# Define the "enrich_app" rewrite operator
rewrite enrich_app {
    set("PROD",                         # Assign "PROD" as the env value
        value(".SDATA.enrich.env")      # Create the "env" parameter in the "enrich" SD-ELEMENT (use whichever one you want)
        condition(program("my_app")));  # Apply this rewrite only to the app "my_app"
};

# Don't forget to wire it into your log path
log {
    source(s_src);
    rewrite(enrich_app);
    destination(d_logmatic);
};
...
```

By doing this, you will see your "env" parameter assigned to everything that arrives in **Logmatic.io**. Use this method to enrich your sources as you want.
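With this rewrite in place, a message emitted by `my_app` carries the new SD-ELEMENT in the STRUCTURED-DATA part of the RFC 5424 frame. An illustrative, hand-written example line (hostname, timestamp, and PID are made up):

```
<13>1 2016-01-15T10:12:03+00:00 myhost my_app 2345 - [enrich env="PROD"] Application started
```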
Posted by Pierre Guceski 3 years ago
You should check whether your file is rotated by a `logrotate` script with the `copytruncate` option. Rsyslog follows the activity of your file with its own cursor: if the file is truncated, Rsyslog tries to read far beyond the end of a file that has been emptied.

To solve this, note that Rsyslog cursors are usually stored in a `/var/spool/rsyslog/<input_state_file>` file. The idea is to delete the targeted cursors at each rotation in order to force Rsyslog to restart from the beginning. Find below an example of such a modification to a logrotate script:

```
/var/log/myapp/* {
    copytruncate
    compress
    daily
    rotate 7
    notifempty
    missingok
    lastaction
        service rsyslog stop
        rm /var/spool/rsyslog/MyApp-*
        service rsyslog start
    endscript
}
```

Of course, the `rm /var/spool/rsyslog/MyApp-*` pattern has to correspond to the **InputFileStateFile** you have used in your Rsyslog configuration. Please see the [watching your files section](http://doc.logmatic.io/docs/logging-from-linux#section-watching-your-own-files).
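For reference, here is a matching `imfile` configuration sketch whose state file name would be caught by the `rm` pattern above (the file path, tag, and state file name are illustrative):

```
$ModLoad imfile

# Follow the application log; the state file lands in /var/spool/rsyslog/
$InputFileName /var/log/myapp/app.log
$InputFileTag myapp:
$InputFileStateFile MyApp-app
$InputFileSeverity info
$InputRunFileMonitor
```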
Posted by Pierre Guceski 3 years ago
First of all, we advise you to upgrade to a later version of Rsyslog. Checking the [changelog](http://www.rsyslog.com/downloads/download-other/), the TLS TCP client seems to have had several serious bugs that have since been fixed.

This problem actually arises because **Logmatic.io** cuts any TCP connection after 2 minutes of inactivity. For some reason, some Rsyslog versions are not able to reconnect properly when necessary.

To mitigate this issue, we propose using time markers so the connection never goes idle. To do so, add the following 2 lines to your Rsyslog configuration:

```
$ModLoad immark
$MarkMessagePeriod 45
```

And don't forget to restart:

```
sudo service rsyslog restart
```
Posted by Pierre Guceski 3 years ago
You can slightly extend the template defined above to also define the %procid% and even the %msgid%, which are normally part of the RFC 5424 format. We decided not to propose this extended format as the default, since some versions of Rsyslog do not properly replace the process ID in events, which makes it impossible for **Logmatic.io** to parse them properly.

So, if you use this template, just make sure that your log entries in **Logmatic.io** are still properly identified as syslogs:

```
$template LogmaticFormat,"<your_api_key> <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% - %msg%\n"
*.* @@api.logmatic.io:10514;LogmaticFormat
```
Posted by Pierre Guceski 3 years ago
Yes, some applications don't set their app-name properly when they log. For this reason, you may sometimes have a problem with the [RFC 5424](http://tools.ietf.org/html/rfc5424) format defined above.

Our first advice is to check the application and try to force it to log a proper app-name. If this isn't possible, you can fix the issue by replacing the previously defined format with the following one:

```
# Ensure that the appname is not empty
if strlen($app-name) == 0 then {
    set $!new-appname = "-";
} else {
    set $!new-appname = $app-name;
}

$template LogmaticFormat,"{{ api_key }} <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %$!new-appname% %procid% %msgid% - %msg%\n"
*.* @@{{ api_host }}:10514;LogmaticFormat
```
Posted by Pierre Guceski 3 years ago
If you need to add meta information to all your log events from a specific stream or machine, you should use [RFC 5424 SD-PARAMs](https://tools.ietf.org/html/rfc5424#section-6.3). **Logmatic.io** recognizes and parses them automatically.

To do this, change the template provided above and replace the "-" (which means empty) with your params, as illustrated here:

```
$template LogmaticFormat,"<your_api_key> <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - [metas env=\"PROD\" attr2=\"value2\"] %msg%\n"
```

If you do this, all your log events in *Logmatic.io* should come with the two defined attributes properly parsed and extracted, as shown here:

```
{
  "custom": {
    "env": "PROD",
    "attr2": "value2",
    "message": "...",
    ...
  },
  "syslog": {
    ...
  }
}
```
Posted by Pierre Guceski 3 years ago
The S3 log streamer has been built to follow the log files of a single directory, and thus potentially of a single service. However, you can launch multiple log streamers side by side simply by giving each one its own state file:

```
> ... STATE_FILE=<state_file1> node index.js
> ... STATE_FILE=<state_file2> node index.js
etc...
```
Posted by Pierre Guceski 3 years ago