T-Racks - examples

How to use the T-Racks framework

T-Racks is a simple, handy tool and framework to support the analysis of execution paths and measurements, specifically in a multithreaded environment. We can analyze an application's execution path and state at runtime as usual by debugging it, but sometimes that is a cumbersome task, depending on the application's architecture and on our tooling. In most cases debugging is the best choice, but when it comes to concurrency, specifically with reused threads from thread pools managed by a JEE web container or EJB container or the like, it can leave us with a headache and a lot of time spent fruitlessly. Many developers then try to catch the fish by implementing lots of logging statements, like mass fishing rods. That is a way to analyze an execution path and state at individually chosen points, independent of the runtime, but the output is mixed up with all the other entries in the log files, and developers tend not to remove these logging statements after the analysis, because distinguishing them from the permanently important ones means reviewing each one individually. Some developers pollute the code with statements that denote the start and end of each and every method in the entire application, but that is rather the job for a framework that applies cross-cutting behavior (as with AOP) or for a tool that uses agents and extra classloaders to instrument the classes (as with classical tracing tools used on a production or test server).

T-Racks primarily aims at local developer tests and test environments, and provides a quick and easy way to output or retrieve tracing data from individually chosen points. Quick and easy, because the most important tracing data is extracted and provided automatically and because it is simple to use. Individually chosen points, because we want to analyze specific matters and individual suspects and pass selected values to the tracing. Most of the time we have a clue about where to analyze, so it is less helpful to get masses of tracing data or log entries and to have to search for the needle in the haystack.

T-Racks can output tracing data in whatever format, filtering and sorting we want (we can extend it), but out of the box it provides a StackTracer, a QueueTracer and an XMLTracer. With these we can output the tracing data in a CSV format in different sequences and in chunks, as well as in an XML format for a better overview of execution paths. T-Racks also supports tracing levels (not to be confused with log levels). The output is usually exported to a dedicated file (not the log files) or printed to the console, but we can also just get the tracing data from the tracer for our own further processing. In order to get complete output with multiple threads, which is especially important for the XML format, the tracers provide operations to wait for all threads, either for individual tracks or for the entire tracing. This way it also works well with unit tests, where the main thread would otherwise terminate before the tracing is completed.
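The need to wait for worker threads is not specific to T-Racks. The following self-contained sketch uses plain java.util.concurrent (not the T-Racks API) to show why a test's main thread has to wait before it can collect complete results from pooled threads:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AwaitDemo {
    public static void main(String[] args) throws InterruptedException {
        // Trace-like entries collected from several pooled worker threads.
        List<String> entries = new CopyOnWriteArrayList<>();

        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 3; i++) {
            final int id = i;
            pool.execute(() -> entries.add("worker-" + id + " passed the trace point"));
        }

        // Without this wait, the main thread could inspect (or export) the
        // entries before all workers have written them -- the same reason a
        // tracer has to offer await operations to get complete output.
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        System.out.println(entries.size()); // prints 3
        if (entries.size() != 3) {
            throw new IllegalStateException("not all workers completed");
        }
    }
}
```

The same principle is behind the await operations of the tracers: only after all participating threads are done is the collected tracing data complete.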


Let's look at some usage examples. For instance, in a foobar unit test we write...

StackTracer tracer = new StackTracer();            // optionally give the tracer a name to distinguish it
tracer.setExportFile(new File(foobarTracingFile)); // optional: export to a dedicated file
tracer.setCapacity(100, true, true);               // optional: capacity limit; print / export each chunk before it is discarded

// ... add the tracer to the Tracers and execute the unit under test (Foobar) ...

tracer.awaitTotalCompletion(0, 100, TimeUnit.MILLISECONDS); // wait for all threads before the test's main thread terminates

...to create a tracer that works on a stack, optionally give it a name to distinguish it later (we can employ several tracers at the same time), optionally set a file to output to, and optionally set a capacity limit (to avoid running into memory issues) together with flags that indicate whether a chunk shall be printed or exported before it is thrown away. Then we add the tracer to the Tracers and execute the unit under test (here Foobar). Finally, with multiple threads, we have to wait, with a delay and/or timeout, for all threads before the main thread of the unit test terminates.
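To make the idea more concrete, here is a minimal, self-contained sketch of what a stack-style tracer could look like. This is not the T-Racks StackTracer; the class name, the capacity handling and the CSV layout are assumptions for illustration only:

```java
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.stream.Collectors;

// Hypothetical sketch of a stack-style tracer: it records which thread
// passed which trace point with which value, newest entry on top.
// Illustration only -- NOT the T-Racks implementation.
public class MiniStackTracer {
    private final ConcurrentLinkedDeque<String> stack = new ConcurrentLinkedDeque<>();
    private final int capacity;

    public MiniStackTracer(int capacity) {
        this.capacity = capacity;
    }

    // Record a trace point together with the current thread's name.
    public void trace(String point, Object value) {
        if (stack.size() >= capacity) {
            stack.pollLast(); // drop the oldest entry instead of growing unbounded
        }
        stack.push(Thread.currentThread().getName() + ";" + point + ";" + value);
    }

    // Export the collected entries as CSV lines, newest first.
    public String exportCsv() {
        return stack.stream().collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        MiniStackTracer tracer = new MiniStackTracer(100);
        tracer.trace("Foobar.start", 42);
        tracer.trace("Foobar.end", "ok");
        System.out.println(tracer.exportCsv());
    }
}
```

The thread name in each entry is what makes such a tracer useful with pooled threads: entries from different threads can be told apart afterwards without a debugger attached.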