JDriven Blog

SonarCloud GitHub Pull Request Analysis from Jenkins for Java/Maven projects

Posted by Tim te Beek

SonarCloud is a code quality tool that can identify bugs and vulnerabilities in your code. This post will explore how to integrate SonarCloud, GitHub, Jenkins and Maven to report any new code quality issues on pull requests.

SonarCloud is the cloud-based variant of SonarQube, freeing you from running and maintaining a server instance. Older SonarQube versions (before 7) had a preview analysis mode to report any new issues in a branch on the associated pull request. In newer SonarQube versions this functionality has moved to the paid editions, or to the SonarCloud offering.
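
To give an impression of the Jenkins side, here is a minimal sketch of a declarative pipeline stage that runs the analysis only for pull request builds. It assumes a multibranch pipeline (which provides the CHANGE_* variables), the sonar-maven-plugin configured on the project, and a hypothetical secret text credential 'sonarcloud-token'; the organization is a placeholder.

    stage('SonarCloud PR analysis') {
        // Only run for pull request builds in a multibranch pipeline
        when { changeRequest() }
        steps {
            // 'sonarcloud-token' is a hypothetical secret text credential
            withCredentials([string(credentialsId: 'sonarcloud-token', variable: 'SONAR_TOKEN')]) {
                // Single quotes: let the shell expand the variables, not Groovy
                sh '''
                    mvn sonar:sonar \
                      -Dsonar.host.url=https://sonarcloud.io \
                      -Dsonar.organization=my-org \
                      -Dsonar.login=$SONAR_TOKEN \
                      -Dsonar.pullrequest.key=$CHANGE_ID \
                      -Dsonar.pullrequest.branch=$CHANGE_BRANCH \
                      -Dsonar.pullrequest.base=$CHANGE_TARGET
                '''
            }
        }
    }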

Continue reading →

Automation and Measurement as first class citizens in your sprint backlog

Posted by Jasper Bogers

When you start work on a product, your velocity may be low, and it will not reflect the investment you need to make in proper continuous delivery. Here’s an idea to make that investment visible.

When you build a soda factory, producing your first can of soda effectively costs as much as the entire factory. Of course you plan to produce a whole lot more, and distribute the cost over your planned production.

This is an analogy that’s worth considering when starting on a new product with your Scrum team. During the first few sprints of work on a product, a team is often busy setting up the delivery pipeline, test framework, local development environment, etc. All this work undeniably has value, but usually isn’t expressed as "product features".

For example: you have 20 similar functional user stories that would each take equal effort to implement. For the first two sprints your functional burndown is low. This is because during sprint planning, whichever user story gets picked up first has the questionable honour of having subtasks such as "Arrange access to Browserstack", "Set up Jenkins", "Set up AWS account", "Set up OpsGenie for alerting" and "Set up Blazemeter for load tests", to name a few.

Consider what the Scrum Guide says about a deliverable increment:

Incremental deliveries of "Done" product ensure a potentially useful version of working product is always available.

a "Done", useable, and potentially releasable product Increment is created

The Increment is the sum of all the Product Backlog items completed during a Sprint and the value of the increments of all previous Sprints. At the end of a Sprint, the new Increment must be "Done," which means it must be in useable condition and meet the Scrum Team’s definition of "Done". An increment is a body of inspectable, done work that supports empiricism at the end of the Sprint. The increment is a step toward a vision or goal. The increment must be in useable condition regardless of whether the Product Owner decides to release it.

Development Teams deliver an Increment of product functionality every Sprint. This Increment is useable, so a Product Owner may choose to immediately release it.

This is problematic because it means your first few sprints tell you little about your ability to deliver value given the manpower and knowledge at your disposal. It may also mean your first few sprints fail to deliver any functional increment that could go live. Because what you’ve decided constitutes value differs from what you’re investing in, it may feel like you’re forced to do necessary work without seeing measurable results. You have little to demo during your sprint reviews. Product owners get nervous the longer this takes. You’re destined to be off to a poor start.

See the following sprint backlog and resulting velocity chart. When you hide all the automation and measurement boilerplate work as subtasks underneath whichever user stories you pick up first, your burndown charts give the impression you achieved very little.

"Fat" user stories with automation and measurement as boilerplating subtasks hidden behind user story velocity

This doesn’t seem fair.

Some resort to starting out with a "Sprint 0" of undefined length and without a sprint goal, to just get all the ramping up out of the way, as though it’s a necessary evil. Don’t do this. Focus on delivering value from the start.

Continue reading →

Publish your backend API typings as an NPM package

Posted by Christophe Hesters

In this post I suggest a way to publish your backend API typings as an NPM package. Frontend projects using TypeScript can then depend on these typings to gain compile-time type safety and code completion.

It’s quite common in a microservice-style architecture to provide a type-safe client library that other services can use to communicate with your service. This can be a package with a Retrofit client, published to Nexus by the maintainer of the service. Some projects might also generate that code from an OpenAPI spec or a gRPC proto file.

However, when we expose some of these APIs to the frontend, we lose the types. In this post I suggest a way to publish your backend API types as an NPM package. The frontend can then depend on these typings. Using TypeScript you now have compile-time type safety and code completion. To see a sneak peek, scroll to the end :).
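
To give an idea of the end result, here is a minimal sketch. The package name @my-org/backend-api-types, the UserDto shape and the endpoint are made up for illustration; the published package contains only type declarations, no runtime code.

    // index.d.ts in a hypothetical @my-org/backend-api-types package
    export interface UserDto {
        id: number;
        name: string;
        email: string;
    }

    // Frontend code, after: npm install @my-org/backend-api-types
    import { UserDto } from '@my-org/backend-api-types';

    // The compiler now checks every use of the returned object against UserDto
    async function getUser(id: number): Promise<UserDto> {
        const response = await fetch(`/api/users/${id}`);
        return response.json() as Promise<UserDto>;
    }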

Continue reading →

Effectively use MapStruct and Lombok's builder

Posted by Willem Cheizoo

Now that MapStruct version 1.3.0.Final is out, we can integrate better with Lombok's Builder pattern. MapStruct is a library that takes away a lot of boilerplate code for mapping between POJOs. With MapStruct there is no need to implement the actual mapping yourself.

With Lombok we can use the Builder pattern and mark a class as a @Value (object), which results in an immutable object. This blog post shows how we can make MapStruct use Lombok's Builder pattern.
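
As a minimal sketch of the combination (the Person source class and its fields are hypothetical): Lombok generates the builder, and since 1.3.0.Final MapStruct detects that builder and uses it in the generated mapper implementation.

    import lombok.Builder;
    import lombok.Value;
    import org.mapstruct.Mapper;
    import org.mapstruct.factory.Mappers;

    // @Value makes the class immutable; @Builder gives MapStruct a way to construct it
    @Value
    @Builder
    class PersonDto {
        String name;
        int age;
    }

    // MapStruct 1.3.0.Final and later pick up the Lombok-generated builder automatically
    @Mapper
    interface PersonMapper {
        PersonMapper INSTANCE = Mappers.getMapper(PersonMapper.class);

        // Assumes a Person source class with matching getName()/getAge() getters
        PersonDto toDto(Person person);
    }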

Continue reading →

Kotlin Exposed - A lightweight SQL library

Posted by Christophe Hesters

A big part of the fun of starting a new project is making a design and choosing an appropriate technology stack. If you are a Java developer and need to access a SQL database, a common choice is to use JPA with an ORM framework such as Hibernate. This adds a lot of complexity to your project for multiple reasons. In my experience, writing performant queries requires careful analysis. Writing custom queries is possible but more complex. For starters, JPQL/HQL queries are parsed at runtime, and the criteria API is far from user-friendly. Moreover, the extensive use of annotations makes it harder to quickly see how the database is structured.

Kotlin Exposed is a lightweight SQL library on top of JDBC that could serve as a good alternative. When using Kotlin Exposed you start by describing your database structure in plain Kotlin code. The code resembles SQL DDL statements very closely and does not require any annotations or reflection! You can use these descriptions to write type-safe queries. These queries can be written in two flavors: DSL and/or DAO. This post focuses on the DSL flavor.
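
As a small taste of the DSL flavor, here is a minimal sketch; the users table is made up, and an H2 driver on the classpath is assumed.

    import org.jetbrains.exposed.sql.*
    import org.jetbrains.exposed.sql.transactions.transaction

    // Plain Kotlin that closely resembles the SQL DDL: no annotations, no reflection
    object Users : Table("users") {
        val id = integer("id").autoIncrement().primaryKey()
        val name = varchar("name", length = 50)
    }

    fun main() {
        Database.connect("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1", driver = "org.h2.Driver")
        transaction {
            SchemaUtils.create(Users)
            Users.insert { it[name] = "Alice" }
            // Type-safe query: the compiler knows the column types
            Users.select { Users.name eq "Alice" }
                .forEach { row -> println(row[Users.id]) }
        }
    }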

Continue reading →

Let's Play!

Posted by Justus Brugman

After my last post, it seemed like a good time for a more technical story. The goal is to set up a Hello World application using the Play framework, with an Angular front-end, running in a Docker image deployed to your locally running Kubernetes!
So let’s play!

Play has been around for quite some time now. It was built by web developers to make it easier to develop web applications in either Java or Scala. Play is reactive(1) by default, uses the MVC architecture(2) and is built on Akka(3). Akka can be described as ‘the implementation of the Actor Model(4) on the JVM’. Play is a lightweight, stateless framework that provides all the components you need for web applications and REST services. It’s easy to scale both horizontally and vertically. The framework integrates an HTTP server, CSRF protection and i18n support, supports Ebean, JPA and Slick, and does hot reloading of your code. This makes it easy to directly see the results of your work. Besides all that, it’s just FUN to use! For more information about Play, just visit their site.
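
As a taste of how little code a Play Hello World takes, a minimal Java controller and its route might look like this (a sketch, assuming Play 2.x with the Java API):

    // app/controllers/HomeController.java
    package controllers;

    import play.mvc.Controller;
    import play.mvc.Result;

    public class HomeController extends Controller {
        // Responds with 200 OK and a plain-text body
        public Result index() {
            return ok("Hello World");
        }
    }

    // conf/routes would map a URL to this action:
    // GET     /     controllers.HomeController.index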

Continue reading →

Containerization: Is it the solution that solves your DevOps issues?

Posted by Justus Brugman

Nowadays you can’t walk into an IT department without hearing discussions about containerization. Should we move to OpenStack or OpenShift? Do we want to use Pivotal Cloud Foundry? What about Docker Swarm or Kubernetes? How do we integrate our new Kubernetes cluster into our CI/CD pipelines?

Keep in mind that DevOps on unmanaged infrastructure adds an extra layer of complexity for development teams. In the end, you might save money on an Ops team, but the tasks still need to be carried out, leaving the work to the development team. As an example: when you write some new code and open a pull request, the CI/CD pipeline kicks in to automatically build your code, run (unit) tests, deploy to the next environment, and so on. When an error occurs, the pull request is rejected, leaving the developer to fix any issues, even infrastructure ones.

Continue reading →

AWS accounts & users: Separation of Concerns

Posted by Casper Rooker

Separating concerns is something we as developers are used to thinking about in terms of code. But the same also applies to identity management. If you’ve dabbled in AWS, you can get started right away with a root account. However, when it goes beyond dabbling, it might be a good idea to start splitting up responsibilities.

Continue reading →
