Spring Boot 2.1 introduced log groups. A log group is a logical name for one or more loggers. We can define log groups in our application configuration and then set the log level for a group, so all loggers in the group get the same log level. This is very useful for changing the log level of multiple loggers that belong together with a single setting. Spring Boot already provides two log groups by default: web and sql. In the following list we see which loggers are part of the default log groups:
- web: org.springframework.core.codec, org.springframework.http, org.springframework.web, org.springframework.boot.actuate.endpoint.web, org.springframework.boot.web.servlet.ServletContextInitializerBeans
- sql: org.springframework.jdbc.core, org.hibernate.SQL
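As a small sketch in application.properties, we could set a level for the predefined web group and define our own group (the custom group name and packages here are illustrative, not taken from this post):

```properties
# Give every logger in the predefined 'web' group the same level
logging.level.web=DEBUG

# Define a custom log group with loggers that belong together...
logging.group.tomcat=org.apache.catalina,org.apache.coyote,org.apache.tomcat
# ...and set the level for the whole group with one setting
logging.level.tomcat=TRACE
```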
Continue reading →
To get an overview of all Gradle tasks in our project we need to run the tasks task. Since Gradle 5.1 we can use the --group option followed by a group name. Gradle will then only show the tasks belonging to that group and not the other tasks in the project.
Suppose we have a Gradle Java project and want to show the tasks that belong to the build group:
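For example (assuming the project uses the Gradle wrapper), we could run:

```shell
$ ./gradlew tasks --group build
```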
Continue reading →
Since Java 9 we can specify that the Javadoc output must be generated in HTML 5 instead of the default HTML 4. We need to pass the option -html5 to the javadoc tool. To do this in Gradle we must add the option to the javadoc task configuration. We use the addBooleanOption method of the options property that is part of the javadoc task. We set the argument to html5 and the value to true.

In the following example we reconfigure the javadoc task to make sure the generated Javadoc output is in HTML 5:
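A minimal sketch of that reconfiguration in a Groovy build.gradle (assuming the java plugin is applied) could look like this:

```groovy
apply plugin: 'java'

javadoc {
    // Tell the javadoc tool to generate HTML 5 output (Java 9+)
    options.addBooleanOption('html5', true)
}
```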
Continue reading →
One of the most important features in Gradle is the support for incremental tasks. Incremental tasks have input and output properties that can be checked by Gradle. When the values of these properties haven't changed, the task is marked as up to date by Gradle and is not executed. This makes a build much faster. Input and output properties can be files, directories or plain object values. We can set a task input property to a date or date/time value to define for which period a task is up to date. As long as the value of the input property hasn't changed (and of course the other input and output property values haven't changed either), Gradle will not rerun the task and will mark it as up to date. This is useful, for example, if a long-running task (e.g. a large integration test suite) only needs to run once a day or once in some other period.
In the following example Gradle build file we define a new task Broadcast that will get content from a remote URL and save it in a file. In our case we want to save the latest messages from SDKMAN!. If you don't know SDKMAN!, you should check it out! The Broadcast task has an incremental task output property, which is the output file of the task:
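As a rough sketch under assumptions (the SDKMAN! URL, the date-based input property and the file name are illustrative), such a task could look something like this in build.gradle:

```groovy
class Broadcast extends DefaultTask {
    // Input property: as long as this value (today's date) stays the same,
    // Gradle considers the task up to date and skips it.
    @Input
    String date = new Date().format('yyyy-MM-dd')

    // Incremental output property: the file the content is saved to.
    @OutputFile
    File outputFile

    @TaskAction
    void fetch() {
        // Illustrative URL; the real post fetches the latest SDKMAN! messages.
        outputFile.text = new URL('https://api.sdkman.io/2/broadcast/latest').text
    }
}

task broadcast(type: Broadcast) {
    outputFile = file("$buildDir/broadcast.txt")
}
```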
Continue reading →
In Micronaut we can inject configuration properties into our beans in different ways. We can for example use the @Value annotation with a string value that contains a placeholder for the configuration property name. If we don't want to use a placeholder we can also use the @Property annotation and set the name attribute to the configuration property name. We have to pay attention to the format of the configuration property name we use. If we refer to a configuration property name using @Value or @Property we must use lowercased and hyphen-separated names (also known as kebab casing), even if the name of the configuration property is camel cased in the configuration file. For example if we have a configuration property sample.theAnswer in our application.properties file, we must use the name sample.the-answer to get the value.
In the following Spock specification we see how to use it in code. The specification defines two beans that use the @Value and @Property annotations, and we see that we need to use kebab casing for the configuration property names, even though we use camel casing to set the configuration property values:
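As a rough Java sketch of the two beans (the bean names are made up; the original example is a Spock specification), they could look like this:

```java
import io.micronaut.context.annotation.Property;
import io.micronaut.context.annotation.Value;

import javax.inject.Singleton;

@Singleton
class ValueBean {
    // The placeholder uses the kebab-cased name, even though the property
    // is defined as sample.theAnswer in application.properties.
    @Value("${sample.the-answer}")
    protected int answer;

    public int getAnswer() { return answer; }
}

@Singleton
class PropertyBean {
    // The name attribute of @Property also uses the kebab-cased form.
    @Property(name = "sample.the-answer")
    protected int answer;

    public int getAnswer() { return answer; }
}
```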
Continue reading →
I really like Maven for the structured way it provides for defining and building a project. But sometimes I wish for a less verbose notation than the XML of the Project Object Model (POM). For example, Gradle's dependency notation is far shorter than Maven's dependency declaration. Looking for a less verbose way to declare a Maven POM, I discovered Polyglot Maven. It is a set of Maven extensions that allow the Maven POM to be written in another dialect than XML. Since you see YAML more and more, I decided to try that dialect and see if my Maven descriptor would become clearer.
- Create a directory to work in, {projectdir}, and change into it.
- To register the extensions for Maven, create a file {projectdir}/.mvn/extensions.xml and add the extension:
- Now it's possible to write the Maven POM in YAML, {projectdir}/pom.yml:
By using the YAML inline map (or dictionary) notation, declaring a dependency takes far fewer characters than when using XML.
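To make the steps above concrete, a hedged sketch could look like this. First the extension registration in .mvn/extensions.xml (the version is just an example):

```xml
<extensions>
  <extension>
    <groupId>io.takari.polyglot</groupId>
    <artifactId>polyglot-yaml</artifactId>
    <version>0.4.3</version>
  </extension>
</extensions>
```

And a pom.yml using the inline map notation for a dependency (the coordinates are examples too):

```yaml
modelVersion: 4.0.0
groupId: com.example
artifactId: yaml-pom-sample
version: 1.0.0-SNAPSHOT
packaging: jar

dependencies:
  - { groupId: org.junit.jupiter, artifactId: junit-jupiter-api, version: 5.5.2, scope: test }
```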
Continue reading →
In my early days I spent most of my time fixing bugs on a huge enterprise application, so by now I have learned from experience that a lot of bugs could have been easily prevented. This is why I prefer a functional programming style; I love how FP handles state. As a software consultant I get to switch companies and teams quite regularly, and most projects I have been working on use Java 7 or 8. This almost always leads to a few discussions regarding programming style. So today I would like to talk about good FP principles, and how Java makes them hard (and why languages like Kotlin are awesome).
Most of my variables (95%+) are usually immutable, and I would like my compiler to check this for me. In Kotlin we have val and var to declare variables, val being immutable and var being mutable. To make a variable immutable in Java, we need to put the final keyword before every variable, including parameters, to get the behaviour I desire.
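A tiny Kotlin sketch of the difference (the names are made up):

```kotlin
fun describe(name: String): String {
    val greeting = "Hello"   // immutable: the compiler rejects reassignment
    var attempts = 0         // mutable: reassignment is allowed
    attempts += 1
    // greeting = "Hi"       // compile error: val cannot be reassigned
    return "$greeting $name, attempt $attempts"
}
```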
Continue reading →
Infrastructure automation basically is the process of scripting environments — from installing an OS to installing and configuring servers on instances.
It also includes configuring how the instances and software communicate with one another, and much more.
Automation allows you to redeploy your infrastructure or rebuild it from scratch, because you have a repeatable documented process.
It also allows you to scale the same configuration to a single node or to thousands of nodes.
In the past years, several open source and commercial tools have emerged to support infrastructure automation. These tools include Ansible, Chef, Terraform and Puppet. They support cloud platforms, but also virtual and physical environments. On Google Cloud Platform you can use Cloud Deployment Manager, which allows you to automate the configuration and deployment of your Google Cloud resources with parallel, repeatable deployments and template-driven configurations.
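As a rough sketch of such a template-driven configuration (the resource name, zone, machine type and image are assumptions), a Deployment Manager YAML file describing a single Compute Engine instance might look like this:

```yaml
resources:
- name: example-vm
  type: compute.v1.instance
  properties:
    zone: europe-west1-b
    machineType: zones/europe-west1-b/machineTypes/f1-micro
    disks:
    - boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
```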
Continue reading →
Normally we would consume server-sent events (SSE) in a web browser, but we can also consume them in our code on the server. Micronaut has a low-level HTTP client with an SseClient interface that we can use to get server-sent events. The interface has an eventStream method with different arguments that return a Publisher type of the Reactive Streams API. We can use the RxSseClient interface to get back an RxJava2 Flowable return type instead of the Publisher type. We can also use Micronaut's declarative HTTP client, which we define with the @Client annotation and which supports server-sent events with the correct annotation attributes.
In our example we first create a controller in Micronaut to send out server-sent events. We must create a method that returns a Publisher type with Event objects. These Event objects can contain some attributes, like id and name, but also the actual object we want to send:
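As a hedged sketch (the path, event name and payload are made up), such a controller could look like this in Java:

```java
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.sse.Event;
import io.reactivex.Flowable;
import org.reactivestreams.Publisher;

import java.util.concurrent.TimeUnit;

@Controller("/news")
public class NewsController {

    // Emits a server-sent event every second with an id, a name and a payload.
    @Get(produces = MediaType.TEXT_EVENT_STREAM)
    public Publisher<Event<String>> news() {
        return Flowable.interval(1, TimeUnit.SECONDS)
                       .map(counter -> Event.of("News item " + counter)
                                            .id(String.valueOf(counter))
                                            .name("news"));
    }
}
```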
Continue reading →
In the first part we set up a local Keycloak instance. In this blog we will see how we can leverage Keycloak to secure our frontend. For this purpose we will create a small Spring Boot application that serves a web page. The next and last blog will show how authentication can be used between services.
As mentioned, we will create a small Spring Boot microservice and secure it using Spring Security and Keycloak. The service that we will create in this blog is the "frontend" Spring service. It serves a simple web page that displays a hello message including the user's email address as registered in Keycloak. In the next blog we will build the backend service and propagate the authorization from the frontend to the service we call. This way we build a complete Single Sign-On solution.
Continue reading →
gRPC is a high-performance RPC framework created by Google. It runs on top of HTTP/2 and defaults to Protocol Buffers instead of JSON on the wire. Like probably most developers, I've been creating microservices for the past several years. These services usually communicate over HTTP with a JSON payload. In this post I want to present gRPC as an alternative to REST when most of your APIs look like remote procedure calls.
In my time using REST APIs I have encountered many discussions about status codes (which do not map very well onto your application errors), API versioning, PUT vs PATCH, how to deal with optional fields, when to include things as query parameters, etc. This can lead to inconsistent APIs and requires clear documentation.
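As a small, hypothetical illustration of such an RPC-style contract (the service and message names are made up), a Protocol Buffers definition could look like this:

```proto
syntax = "proto3";

package example;

// The remote procedure is named explicitly, instead of being mapped
// onto HTTP verbs, paths and status codes.
service GreetingService {
  rpc Greet (GreetRequest) returns (GreetReply);
}

message GreetRequest {
  string name = 1;
}

message GreetReply {
  string message = 1;
}
```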
Continue reading →