mscharhag, Programming and Stuff;

A blog about programming and software development topics, mostly focused on Java technologies including Java EE, Spring and Grails.

  • Wednesday, 12 February, 2020

    REST / HTTP methods: POST vs. PUT vs. PATCH

    Each HTTP request consists of a method (sometimes called verb) that indicates the action to be performed on the identified resource.

    When building RESTful web services, the HTTP method POST is typically used for resource creation while PUT is used for resource updates. While this is fine in most cases, it can also be viable to use PUT for resource creation. PATCH is an alternative for resource updates as it allows partial updates.

    In general we can say:

    • POST requests create child resources at a server-defined URI. POST is also used as a general processing operation
    • PUT requests create or replace the resource at a client-defined URI
    • PATCH requests update parts of the resource at a client-defined URI

    But let's look a bit more into the details and see how these verbs are defined in the HTTP specification. The relevant part here is section 9 of the HTTP RFC (2616).

    POST

    The RFC describes the function of POST as:

    The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line.

    This allows the client to create resources without knowing the URI of the new resource in advance. For example, we can send a POST request to /projects to create a new project. The server can then create the project as a new subordinate of /projects, for example: /projects/123. So when using POST for resource creation, the server can decide the URI (and typically the ID) of the newly created resource.

    When the server creates a resource, it should respond with the 201 (Created) status code and a Location header that points to the newly created resource.

    For example:

    Request:

    POST /projects HTTP/1.1
    Content-Type: application/json
    
    {
        "name": "my cool project",
        ...
    }
    

    Response:

    HTTP/1.1 201 Created
    Location: https://cool.api.com/projects/123
    

    POST is not idempotent, so sending the same POST request multiple times can result in the creation of multiple resources. Depending on your needs this might be a useful feature. If not, you should have some validation in place and make sure a resource is only created once based on some custom criteria (e.g. the project name has to be unique).
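    Such a uniqueness check can be sketched with a simple in-memory store (all class and method names here are hypothetical, not part of any framework):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical project store guarding POST creation against duplicates:
// a repeated request with the same name does not create a second resource.
public class ProjectStore {
    private final Map<String, Integer> idsByName = new HashMap<>();
    private int nextId = 1;

    // Returns the id of the created project, or empty if the name is taken.
    // The HTTP layer would translate "empty" into e.g. a 409 Conflict response.
    public Optional<Integer> create(String name) {
        if (idsByName.containsKey(name)) {
            return Optional.empty();
        }
        int id = nextId++;
        idsByName.put(name, id);
        return Optional.of(id);
    }
}
```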

    The RFC also tells us:

    The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.

    This means that POST does not necessarily need to create resources. It can also be used to perform a generic action (e.g. starting a batch job, importing data or processing something).

    PUT

    The main difference between POST and PUT is a different meaning of the request URI. The HTTP RFC says:

    The URI in a POST request identifies the resource that will handle the enclosed entity. [..] In contrast, the URI in a PUT request identifies the entity enclosed with the request [..] and the server MUST NOT attempt to apply the request to some other resource.

    For PUT requests the client needs to know the exact URI of the resource. We cannot send a PUT request to /projects and expect a new resource to be created at /projects/123. Instead, we have to send the PUT request directly to /projects/123. So if we want to create resources with PUT, the client needs to know (how to generate) the URI / ID of the new resource.

    In situations where the client is able to generate the resource URI / ID for new resources, PUT should actually be preferred over POST. In these cases the resource creation is typically idempotent, which is a clear hint towards PUT.

    It is fine to use PUT for creating and updating resources. So sending a PUT request to /projects/123 might create the project if it does not exist or replace the existing project. HTTP status codes should be used to inform the client whether the resource has been created or updated.
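    This create-or-replace behavior can be sketched with a map-backed store, where the client-defined ID is the key (the names are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical PUT handler: the stored value is replaced completely
// on every call, matching PUT's replace semantics.
public class ProjectResource {
    private final Map<String, String> projects = new HashMap<>();

    // Returns the HTTP status code the server would send:
    // 201 if the resource was created, 200 if an existing one was replaced.
    public int put(String id, String body) {
        boolean existed = projects.containsKey(id);
        projects.put(id, body); // full replacement, not a merge
        return existed ? 200 : 201;
    }

    public String get(String id) {
        return projects.get(id);
    }
}
```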

    The HTTP RFC tells us:

    If a new resource is created, the origin server MUST inform the user agent via the 201 (Created) response. If an existing resource is modified, either the 200 (OK) or 204 (No Content) response codes SHOULD be sent to indicate successful completion of the request.

    Generally speaking, if the exact resource URI is known and the operation is idempotent, PUT is typically a better choice than POST. In most situations this makes PUT a good choice for update requests.

    However, there is one quirk that should be remembered for resource updates. According to the RFC, PUT should replace the existing resource with the new one. This means we cannot do partial updates. So, if we want to update a single field of the resource, we have to send a PUT request containing the complete resource.

    PATCH

    The HTTP PATCH method is defined in RFC 5789 as an extension to the earlier mentioned HTTP RFC. While PUT is used to replace an existing resource, PATCH is used to apply partial modifications to a resource.

    Quoting the RFC:

    With PATCH, [..], the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version.  The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources;

    So PATCH, similar to POST, might also affect resources other than the one identified by the Request URI.

    Often PATCH requests use the same format as the resource that should be updated and just omit the fields that should not change. However, it does not have to be this way. It is also fine to use a separate patch format, which describes how the resource should be modified.

    PATCH is neither safe nor idempotent.

    Maybe you are wondering in which situations a partial resource update is not idempotent. A simple example here is the addition of an item to an existing list resource, like adding a product to a shopping cart. Multiple (partial) update requests might add the product multiple times to the shopping cart.
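    The shopping cart example can be sketched in a few lines (the class is hypothetical, purely for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical cart resource: a PATCH that appends an item is not
// idempotent, because repeating the request keeps growing the list.
public class ShoppingCart {
    private final List<String> items = new ArrayList<>();

    // What a PATCH handler adding an item to the cart would do.
    public void patchAddItem(String productId) {
        items.add(productId);
    }

    public int itemCount() {
        return items.size();
    }
}
```

    Sending the same "add product" patch twice leaves the cart with two entries, so the outcome depends on how often the request is sent.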

    More detailed information about the usage of PATCH can be found in my post about partial updates with PATCH.


    Interested in more REST related articles? Have a look at my REST API design page.


  • Sunday, 9 February, 2020

    HTTP methods: Idempotency and Safety

    Idempotency and safety are properties of HTTP methods. The HTTP RFC defines these properties and tells us which HTTP methods are safe and idempotent. Server applications should make sure to implement the safe and idempotent semantics correctly, as clients might expect it.

    Safe HTTP methods

    HTTP methods are considered safe if they do not alter the server state. So safe methods can only be used for read-only operations. The HTTP RFC defines the following methods to be safe: GET, HEAD, OPTIONS and TRACE.

    In practice it is often not possible to implement safe methods in a way that they do not alter any server state at all.

    For example, a GET request might create log or audit messages, update statistic values or trigger a cache refresh on the server.

    The RFC tells us here:

    Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.

    Idempotent HTTP methods

    Idempotency means that multiple identical requests will have the same outcome. So it does not matter if a request is sent once or multiple times. The following HTTP methods are idempotent: GET, HEAD, OPTIONS, TRACE, PUT and DELETE. All safe HTTP methods are idempotent; PUT and DELETE are idempotent but not safe.

    Note that idempotency does not mean that the server has to respond in the same way on each request.

    For example, assume we want to delete a project by an ID using a DELETE request:

    DELETE /projects/123 HTTP/1.1

    As a response we might get an HTTP 200 status code indicating that the project has been deleted successfully. If we send this DELETE request again, we might get an HTTP 404 as a response because the project has already been deleted. The second request did not alter the server state, so the DELETE operation is idempotent even if we get a different response.
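    The behavior described above can be sketched like this (the class and its methods are made up for illustration):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical DELETE handler: the first call removes the project (200),
// repeated calls find nothing to delete (404) but leave the state unchanged.
public class ProjectDeletion {
    private final Set<String> projects = new HashSet<>();

    public ProjectDeletion(String... ids) {
        for (String id : ids) {
            projects.add(id);
        }
    }

    // Returns the HTTP status code the server would send.
    public int delete(String id) {
        return projects.remove(id) ? 200 : 404;
    }

    public boolean exists(String id) {
        return projects.contains(id);
    }
}
```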

    Idempotency is a positive feature of an API because it can make the API more fault-tolerant. Assume there is an issue on the client side and requests are sent multiple times. As long as idempotent operations are used, this will cause no problems on the server side.

    HTTP method overview

    The following table summarizes which HTTP methods are safe and idempotent:

    HTTP Method   Safe   Idempotent
    GET           Yes    Yes
    HEAD          Yes    Yes
    OPTIONS       Yes    Yes
    TRACE         Yes    Yes
    PUT           No     Yes
    DELETE        No     Yes
    POST          No     No
    PATCH         No     No


    If you are interested in more REST related articles, have a look at my REST API design page to find more articles.


  • Saturday, 1 February, 2020

    Validating code and architecture constraints with ArchUnit

    Introduction

    ArchUnit is a library for checking Java code against a set of self-defined code and architecture constraints. These constraints can be defined in a fluent Java API within unit tests. ArchUnit can be used to validate dependencies between classes or layers, to check for cyclic dependencies and much more. In this post we will create some example rules to see how we can benefit from ArchUnit.

    Required dependency

    To use ArchUnit we need to add the following dependency to our project:

    <dependency>
    	<groupId>com.tngtech.archunit</groupId>
    	<artifactId>archunit-junit5</artifactId>
    	<version>0.13.0</version>
    	<scope>test</scope>
    </dependency>

    If you are still using JUnit 4 you should use the archunit-junit4 artifact instead.

    Creating the first ArchUnit rule

    Now we can start creating our first ArchUnit rule. For this we create a new class in our test folder:

    @RunWith(ArchUnitRunner.class) //only for JUnit 4, not needed with JUnit 5
    @AnalyzeClasses(packages = "com.mscharhag.archunit")
    public class ArchUnitTest {
    
        // verify that classes whose name ends with "Service" are located in a "service" package
        @ArchTest
        private final ArchRule services_are_located_in_service_package = classes()
                .that().haveSimpleNameEndingWith("Service")
                .should().resideInAPackage("..service");
    }

    With @AnalyzeClasses we tell ArchUnit which Java packages should be analyzed. If you are using JUnit 4 you also need to add the ArchUnit JUnit runner.

    Inside the class we create a field and annotate it with @ArchTest. This is our first test.

    We can define the constraint we want to validate by using ArchUnit's fluent Java API. In this example we want to validate that all classes whose name ends with Service (e.g. UserService) are located in a package named service (e.g. foo.bar.service).

    Most ArchUnit rules start with a selector that indicates what type of code units should be validated (classes, methods, fields, etc.). Here, we use the static method classes() to select classes. We restrict the selection to a subset of classes using the that() method (here we only select classes whose name ends with Service). With the should() method we define the constraint that should be matched against the selected classes (here: the classes should reside in a service package).

    When running this test class, all tests annotated with @ArchTest will be executed. The test will fail if ArchUnit detects service classes outside a service package.

    More examples

    Let's look at some more examples.

    We can use ArchUnit to make sure that all Logger fields are private, static and final:

    // verify that logger fields are private, static and final
    @ArchTest
    private final ArchRule loggers_should_be_private_static_final = fields()
            .that().haveRawType(Logger.class)
            .should().bePrivate()
            .andShould().beStatic()
            .andShould().beFinal();
    

    Here we select fields of type Logger and define multiple constraints in one rule.

    Or we can make sure that methods in utility classes have to be static:

    // methods in classes whose name ends with "Util" should be static
    @ArchTest
    static final ArchRule utility_methods_should_be_static = methods()
            .that().areDeclaredInClassesThat().haveSimpleNameEndingWith("Util")
            .should().beStatic();

    To enforce that packages named impl contain no interfaces we can use the following rule:

    // verify that interfaces are not located in implementation packages
    @ArchTest
    static final ArchRule interfaces_should_not_be_placed_in_impl_packages = noClasses()
            .that().resideInAPackage("..impl..")
            .should().beInterfaces();

    Note that we use noClasses() instead of classes() to negate the should constraint.

    (Personally, I think this rule would be much easier to read if we could define it as interfaces().should().notResideInAPackage("..impl.."). Unfortunately, ArchUnit provides no interfaces() method.)

    Or maybe we are using the Java Persistence API and want to make sure that EntityManager is only used in repository classes:

    @ArchTest
    static final ArchRule only_repositories_should_use_entityManager = noClasses()
            .that().resideOutsideOfPackage("..repository")
            .should().dependOnClassesThat().areAssignableTo(EntityManager.class);

    Layered architecture example

    ArchUnit also comes with some utilities to validate specific architecture styles.

    For example, we can use layeredArchitecture() to validate access rules for the layers in a layered architecture:

    @ArchTest
    static final ArchRule layer_dependencies_are_respected = layeredArchitecture()
            .layer("Controllers").definedBy("com.mscharhag.archunit.layers.controller..")
            .layer("Services").definedBy("com.mscharhag.archunit.layers.service..")
            .layer("Repositories").definedBy("com.mscharhag.archunit.layers.repository..")
            .whereLayer("Controllers").mayNotBeAccessedByAnyLayer()
            .whereLayer("Services").mayOnlyBeAccessedByLayers("Controllers")
            .whereLayer("Repositories").mayOnlyBeAccessedByLayers("Services");

    Here we define three layers: Controllers, Services and Repositories. The repository layer may only be accessed by the service layer, while the service layer may only be accessed by controllers.

    Shortcuts for common rules

    To avoid having to define all rules ourselves, ArchUnit comes with a set of common rules defined as static constants. If these rules fit our needs, we can simply assign them to @ArchTest fields in our test.

    For example, we can use the predefined NO_CLASSES_SHOULD_THROW_GENERIC_EXCEPTIONS rule to make sure no exceptions of type Exception and RuntimeException are thrown:

    @ArchTest
    private final ArchRule no_generic_exceptions = NO_CLASSES_SHOULD_THROW_GENERIC_EXCEPTIONS;

    Summary

    ArchUnit is a powerful tool to validate a code base against a set of self defined rules. Some of the examples we have seen are also reported by common static code analysis tools like FindBugs or SonarQube. However, these tools are typically harder to extend with your own project specific rules and this is where ArchUnit comes in.

    As always, you can find the sources of the examples on GitHub. If you are interested in ArchUnit, you should also check the comprehensive user guide.

  • Thursday, 23 January, 2020

    Creating an API Gateway with Zuul and Spring Boot

    Introduction

    When working with microservices it is common to have a unified access point to your system (also called an API Gateway). Consumers only talk to the API Gateway and not to the services directly. This hides the fact that your system is composed of multiple smaller services. The API Gateway also helps to solve common challenges like authentication, managing cross-origin resource sharing (CORS) or request throttling.

    Zuul is a JVM-based API Gateway developed and open-sourced by Netflix. In this post we will create a small Spring application that includes a Zuul proxy for routing requests to other services.

    Enabling the Zuul proxy

    To use Zuul in a project we have to add the spring-cloud-starter-netflix-zuul dependency. If we want to use the Zuul actuator endpoint (more on this later), we also need to add the spring-boot-starter-actuator dependency.

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
    </dependency>
    
    <!-- optional -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    

    Next we have to enable the Zuul proxy using @EnableZuulProxy in our Spring Boot application class (or any other Spring @Configuration class):

    @SpringBootApplication
    @EnableZuulProxy
    public class ZuulDemoApplication {
        ...
    }
    

    Now we can start configuring our routes.

    Configuring routes

    Routes describe how incoming requests should be routed by Zuul. To configure Zuul routes we only have to add a few lines to our Spring Boot application.yml (or application.properties) file:

    application.yml:

    zuul:
      routes:
        users:
          path: /users/**
          url: https://users.myapi.com
        projects:
          path: /projects/**
          url: https://projects.myapi.com
    

    Here we define routes for two endpoints, /users and /projects: requests to /users will be routed to https://users.myapi.com while requests to /projects are routed to https://projects.myapi.com.

    Assume we start this example application locally and send a GET request to http://localhost:8080/users/john. This request matches the Zuul route /users/**, so Zuul will forward the request to https://users.myapi.com/john.

    When using a service registry (like Eureka) we can alternatively configure a service ID instead of a URL:

    zuul:
      routes:
        users:
          path: /users/**
          serviceId: user_service
    

    Another useful option is sensitiveHeaders, which allows us to remove headers before the request is routed to another service. This can be used to avoid leaking sensitive headers (e.g. security tokens or session IDs) to external servers.

    zuul:
      routes:
        users:
          path: /users/**
          url: https://users.myapi.com      
          sensitiveHeaders: Cookie,Set-Cookie,Authorization
    

    Note that the shown example headers (Cookie,Set-Cookie,Authorization) are the default value of the sensitiveHeaders property. So these headers will not be passed on, even if sensitiveHeaders is not specified.

    Request / Response modification with filters

    We can customize Zuul routing using filters. To create a Zuul filter we create a new Spring bean (marked with @Component) which extends ZuulFilter:

    @Component
    public class MyFilter extends ZuulFilter {
    
        @Override
        public String filterType() {
            return FilterConstants.PRE_TYPE;
        }
    
        @Override
        public int filterOrder() {
            return FilterConstants.PRE_DECORATION_FILTER_ORDER - 1;
        }
    
        @Override
        public boolean shouldFilter() {
            return true;
        }
    
        @Override
        public Object run() {
            RequestContext context = RequestContext.getCurrentContext();
            context.addZuulRequestHeader("my-auth-token", "s3cret");
            return null;
        }
    }
    

    ZuulFilter requires the definition of four methods:

    • Within filterType() we define that our filter should run before (PRE_TYPE) the actual routing. If we want to modify the response of the service before it is sent back to the client, we can return POST_TYPE here.
    • With filterOrder() we can influence the order of filter execution
    • shouldFilter() indicates if this filter should be executed (i.e. whether the run() method is called)
    • In run() we define the actual filter logic. Here we add a simple header named my-auth-token to the request that is routed to another service.

    Filters allow us to modify the request before it is sent to the specified service or to modify the response of the service before it is sent back to the client.

    Actuator endpoint

    Spring Cloud Zuul exposes an additional Spring Boot actuator endpoint. To use this feature we need to have spring-boot-starter-actuator in the classpath.

    By default the actuator endpoint is disabled. Within application.yml we enable specific actuator endpoints using the management.endpoints.web.exposure.include property:

    management:
      endpoints:
        web:
          exposure:
            include: '*'
    

    Here we simply enable all actuator endpoints. More detailed configuration options can be found in the Spring Boot actuator documentation.

    After enabling the Zuul actuator endpoint we can send a GET request to http://localhost:8080/actuator/routes to get a list of all configured routes.

    An example response might look like this:

    {
        "/users/**":"https://users.myapi.com",
        "/projects/**":"project_service"
    }
    

    Summary

    With Spring Cloud you can easily integrate a Zuul proxy in your application. This lets you configure routes in .yml or .properties files. Routing behaviour can be customized with filters.

    More details on Spring's support for Zuul can be found in the official Spring Cloud Zuul documentation. As always, you can find the examples shown in this post on GitHub.

  • Monday, 6 January, 2020

    Method parameter validation with Spring and JSR 303

    Spring provides an easy way to validate method parameters using JSR 303 bean validation. In this post we will see how to use this feature.

    Setup

    First we need to add support for method parameter validation by creating a MethodValidationPostProcessor bean:

    @Configuration
    public class MyConfiguration {
        @Bean
        public MethodValidationPostProcessor methodValidationPostProcessor() {
            return new MethodValidationPostProcessor();
        }
    }
    

    Validating method parameters

    After registering the MethodValidationPostProcessor, we can enable method parameter validation per bean by adding the @Validated annotation. Now we can add Java bean validation annotations to our method parameters to perform validation.

    @Service
    @Validated
    public class UserService {
    
        public User getUser(@NotBlank String uuid) {
            ...
        }
    }
    

    Here we added a @NotBlank annotation to make sure the passed uuid parameter is not null or an empty string. Whenever an invalid uuid is passed, a ConstraintViolationException will be thrown.

    Besides simple parameter validation we can also validate objects annotated with JSR 303 annotations.

    For example:

    public class User {
    
        @NotBlank
        private String name;
    
        // getter + setter
    }
    
    @Service
    @Validated
    public class UserService {
    
        public void createUser(@Valid User user) {
            ...
        }
    }
    

    By adding @Valid (not @Validated) we mark the user parameter for validation. The passed user object will then be validated based on the validation constraints defined in the User class. Here the name field should not be null or contain an empty string.

    Note that validation also works for controller method parameters. You can use this to validate path variables, request parameters or other controller method parameters.

    For example:

    @RestController
    @Validated
    public class UserController {
    
        @GetMapping("/users/{userId}")
        public ResponseEntity<User> getUser(
            @PathVariable @Pattern(regexp = "\\w{2}\\d{8}") String userId
        ) {
            // ...
        }
    }
    

    Here we use the @Pattern annotation to validate the path variable userId with a regular expression.

    How does this work?

    The MethodValidationPostProcessor bean we registered is a BeanPostProcessor that checks for each bean if it is annotated with @Validated. If this is the case, Spring creates a proxy object and registers an AOP interceptor (MethodValidationInterceptor) to intercept method calls and perform validation. The actual bean method is only called if the validation was successful.
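    The general idea can be illustrated with a plain JDK dynamic proxy. This is only a simplified sketch of the interception concept, not Spring's actual implementation (which uses AOP and the Bean Validation API); all names besides the JDK classes are made up:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Sketch of the interception idea: a proxy validates arguments before
// delegating to the real bean, so the target method only runs on valid input.
public class ValidatingProxy {

    public interface UserService {
        String getUser(String uuid);
    }

    public static UserService wrap(UserService target) {
        InvocationHandler handler = (proxy, method, args) -> {
            // hypothetical stand-in for a @NotBlank check on the first parameter
            if (args != null && args[0] instanceof String
                    && ((String) args[0]).trim().isEmpty()) {
                throw new IllegalArgumentException("uuid must not be blank");
            }
            return method.invoke(target, args); // only reached if validation passed
        };
        return (UserService) Proxy.newProxyInstance(
                UserService.class.getClassLoader(),
                new Class<?>[]{UserService.class},
                handler);
    }
}
```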

    Limitations

    Because this feature relies on AOP interceptors, it only works on Spring managed beans. It also only works if the method with validation annotations is called from outside the class.

    Let's look at an example to better understand this:

    @Service
    @Validated
    public class UserService {
    
        public void updateUsername(String uuid, String newName) {
            User user = getUser(uuid); // no validation
    
            // ...
        }
    
        public User getUser(@NotBlank String uuid) {
            return new User("John");
        }
    }
    

    Here the getUser(..) method is called inside the updateUsername(..) method. Therefore, the validation of the uuid parameter in getUser(..) is not triggered. There is no AOP proxy involved here.

    Outside classes usually access the class via a reference retrieved through Spring dependency injection. In this case, Spring injects the proxy object and everything works as expected.


    As always you can find the sources for the shown examples on GitHub.