Release notes: September 9th, 2016


2-metric leaderboards

Leaderboards display an ordered list of values. They are especially useful in clickable dashboards: when a member looks worth investigating, clicking it rapidly filters all the other panes. However, we thought that a single metric is sometimes not enough to quickly understand what is going on.

From now on you can pull in a second metric that is displayed alongside the one used for sorting.

Committed and used memory heaps



Real User Monitoring JavaScript SDK: measure webpage performance

You can now easily measure your webpage performance by using the power of Boomerang.js.

This is done thanks to logmatic-rum-js, available on our GitHub.

By just pulling the scripts and setting your API key, you can measure, on your static pages and single-page applications:

  • The total duration to display the page from the first query
  • The rendering time
  • The network time
  • And the time spent downloading all the assets (scripts, CSS, images, etc.), with a summary of the worst offenders

You can also add your own custom timers to easily measure the timing of your various components.

RUM integration


Please have a look into it and feel free to make suggestions!

.NET Serilog sink made available

Serilog is one of the best logging SDKs out there for .NET, and our sink for it is now available on NuGet.

Serilog integration


Blackfire integration

Blackfire empowers all developers and IT/Ops to continuously verify and improve their app's performance, throughout its lifecycle, by getting the right information at the right moment. Logmatic.io is a great complement to Blackfire, as it can both log Blackfire data such as build report history and trigger Blackfire scenarios whenever an alert is raised. Combined with other log sources, this integration helps you automatically get more information on the performance of your code when an issue arises.

Parsing & Enrichment

Enforce the type of incoming attributes

When many log sources are streamed to Logmatic.io, it is sometimes difficult to control the types assigned to attributes.

For instance, let's say that from one source the userId comes as the long 1234, while from another source you get it as the string "1234".
In that situation there used to be only one solution: changing one of the two conflicting log sources to enforce a single type.

From now on, you can solve this tricky issue directly in Logmatic.io's user interface. In the Parsing & Enrichment menu:

  • Click on the Typing tab
  • Add a new Typing rule
  • Define the attribute and the type you want to assign; Logmatic.io does the rest.

Assign a type to an attribute

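Conceptually, a Typing rule simply coerces one named attribute to a declared type before the event is indexed. Here is a minimal Python sketch of that idea; the function name and rule shape are illustrative, not Logmatic.io's actual API:

```python
# Hypothetical sketch: coerce a named attribute to a single type,
# whatever type the log source happened to send.
def apply_typing_rule(event, attribute, target_type):
    """Coerce event[attribute] to target_type if the attribute is present."""
    if attribute in event:
        event[attribute] = target_type(event[attribute])
    return event

# One source sends userId as a long, another as a string;
# after the rule, both events carry the same type and value.
e1 = apply_typing_rule({"userId": 1234}, "userId", int)
e2 = apply_typing_rule({"userId": "1234"}, "userId", int)
assert e1["userId"] == e2["userId"] == 1234
```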

A brand new URL parser

Parsing a URL is something you can do with a grok matching rule... but it is a bit difficult. That's where the new url core filter will help you!

To illustrate the usage, take a URL of the form https://user:password@<host>:8080/a/long/path/file.txt?param1=foo&param2=bar#!/super/hash

It is decomposed as follows by the core filter:

    "protocol": "https",
    "auth": {
      "username": "user",
      "password": "password"
    "host": "",
    "hostname": "",
    "port": 8080,
    "path": "/a/long/path/file.txt",
    "queryString": {
      "param1": "foo",
      "param2": "bar"
    "hash": "#!/super/hash"

Grok parser improvements: multiple matches create arrays

Logmatic.io has its own implementation of Grok. Before this version, when a rule matched multiple times with the same target attribute, only the last value was assigned to it.

From now on, multiple matches create arrays of the matched values. This is actually the normal behaviour suggested by the standard.

The following rule:

rule %{word:my_attr} WHATEVER %{word:my_attr}


Applied to the following log:

hello WHATEVER world

Then results in:

   "truc": ["hello", "world"]