Adventure Seized! Highlights from SpringOne Platform 2018
Mike Koleno, VP of Technology
The theme for this year’s SpringOne Platform was adventure, and, as with any journey, there comes a time when we must rely on one another for fresh insight to succeed. The conference fostered new and impactful partnerships while giving attendees the chance to learn about cloud-native design patterns, technologies, and organizational transformation.
Our team at Solstice sent 12 Spring developers and architects to SpringOne to connect, share, and, most importantly, learn. Here are some of our key findings.
1. Reactive tech revolution gains momentum
Announcements of new technologies supporting the Reactive Programming model were plentiful at SpringOne. Reactive Programming enables more performant and scalable systems by making applications non-blocking, asynchronous, and event-driven. It also takes advantage of streams with push/pull backpressure, in which consumers request data only when they are ready to handle it, and producers send only the amount of data that consumers can handle.
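To make the backpressure model concrete, here is a minimal, pure-JDK sketch using the java.util.concurrent.Flow API, which implements the Reactive Streams contract (Spring applications would use Reactor’s Flux instead). The subscriber pulls exactly one item at a time, so the producer never outruns it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

class BackpressureDemo {
    /** Consumes items one at a time: the next item is requested only after the previous is handled. */
    static List<Integer> consumeWithBackpressure(List<Integer> items) {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);            // pull: ask for exactly one item
                }
                public void onNext(Integer item) {
                    received.add(item);      // handle the item...
                    subscription.request(1); // ...then signal readiness for the next one
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete()        { done.countDown(); }
            });
            items.forEach(publisher::submit);  // producer side: items queue until requested
        } // close() completes the stream once all submitted items drain
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(consumeWithBackpressure(List.of(1, 2, 3, 4, 5))); // prints [1, 2, 3, 4, 5]
    }
}
```

Calling `request(n)` with a larger n would let the subscriber batch its demand; the key point is that demand always flows from consumer to producer.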
Below are some of the Reactive highlights:
• WebFlux framework: The WebFlux framework builds on Java 8 functional features, using lambdas for routing and handling. On the server side, it works with the Reactor types Mono and Flux and adds functional interfaces such as HandlerFunction and RouterFunction. On the client side, it introduces a new WebClient. Significant performance gains can be obtained with the new WebClient, especially when you need to make multiple requests to the server to retrieve and compose data.
• Reactive streams and the RSocket protocol: As detailed further below, the RSocket protocol enables the Reactive stream’s backpressure to be communicated and handled more efficiently across the network between the consumer and producer to request and retrieve data only when needed.
• Reactor Californium release: A sneak peek at the newer versions of Reactor Core 3.2 and Reactor Netty 0.8 was given, with many code examples. Newer features of Reactor Core include more resilient message passing and improved introspection. Reactor Netty is the runtime engine needed to support Reactive Streams backpressure at the network layer.
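The “lambda routing and handling” style mentioned above can be sketched without any framework at all. The following toy router is ours, not the WebFlux API, but it shows the core idea behind RouterFunction and HandlerFunction: routes are predicates paired with handler lambdas rather than annotated controller methods.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.function.Predicate;

class LambdaRouter {
    record Request(String method, String path) {}
    record Response(int status, String body) {}

    // Insertion-ordered map: the first matching predicate wins, as with chained RouterFunctions.
    private final Map<Predicate<Request>, Function<Request, Response>> routes = new LinkedHashMap<>();

    LambdaRouter route(String method, String path, Function<Request, Response> handler) {
        routes.put(r -> r.method().equals(method) && r.path().equals(path), handler);
        return this;
    }

    Response handle(Request req) {
        return routes.entrySet().stream()
                .filter(e -> e.getKey().test(req))
                .findFirst()
                .map(e -> e.getValue().apply(req))
                .orElse(new Response(404, "not found"));
    }

    public static void main(String[] args) {
        LambdaRouter router = new LambdaRouter()
                .route("GET", "/hello", r -> new Response(200, "hi"));
        System.out.println(router.handle(new Request("GET", "/hello")).body()); // prints hi
    }
}
```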
2. R2 (not D2) DBC: A new, non-blocking data I/O from the Pivotal Team
One of the biggest bottlenecks to true Reactive Programming is the database layer and its synchronous, blocking I/O. R2DBC is a natively reactive database driver, not just a wrapper around JDBC: it fully supports functional reactive database access and allows SQL execution to integrate fully with the WebFlux framework.
Spring Data connectors have relied on blocking I/O operations until now. With Reactive Relational Database Connectivity (R2DBC), we can fully leverage asynchronous, promise-style programming in the data space, enabling fully streamed, asynchronous UI loading. The Spring Reactive stack brings several tools that let enterprises adopt just-in-time loading styles and strengthen UI/UX across the board. Data-heavy applications have historically hurt UX because all of a page’s data had to be served at once. Streaming data asynchronously removes that requirement and reduces the extreme database tuning traditionally needed to cut load times in data-heavy applications.
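The just-in-time loading idea can be illustrated with plain JDK laziness. This toy is not the R2DBC API (which returns Reactive Streams publishers); it simulates a database cursor, and because the pipeline is lazy, asking for the first page touches only the rows actually needed:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class StreamingRows {
    // Counts how many rows were actually pulled from the simulated database.
    static final AtomicInteger rowsFetched = new AtomicInteger();

    /** Simulated cursor: each row is produced lazily, only when the consumer asks for it. */
    static Stream<String> rowStream(int total) {
        return Stream.iterate(1, i -> i + 1)
                .limit(total)
                .map(i -> { rowsFetched.incrementAndGet(); return "row-" + i; });
    }

    public static void main(String[] args) {
        rowsFetched.set(0);
        // The UI renders a first page of 3 rows; laziness means only 3 rows are ever fetched,
        // even though the "table" nominally holds a million.
        List<String> firstPage = rowStream(1_000_000).limit(3).collect(Collectors.toList());
        System.out.println(firstPage + " fetched=" + rowsFetched.get());
    }
}
```

A reactive driver takes this one step further: rows also arrive asynchronously, so no thread blocks while waiting for the database.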
3. RSocket: A modern, reactive protocol for modern, reactive applications
“I personally believe Reactive Programming is the next frontier in Java for high-efficiency applications,” said Ben Hale, Java Experience Lead at Cloud Foundry. But there are two major roadblocks to Reactive Programming: data access and networking. While R2DBC was designed to address the data access problem, RSocket is intended to address the networking side of the equation.
Originally developed by Netflix, RSocket is a streaming, message-based protocol for applications based on reactive principles. The best analogy we heard all week was that the HTTP and HTTP/2 protocols were designed for document semantics, whereas the RSocket protocol was designed for application semantics.
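“Application semantics” shows up concretely in RSocket’s four interaction models: fire-and-forget, request/response, request/stream, and request/channel. The sketch below maps those shapes onto plain JDK types; the real protocol uses Reactive Streams publishers over a network transport, and EchoSocket here is a hypothetical in-memory stand-in, not RSocket’s actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Stream;

/** The four RSocket interaction models expressed as plain method shapes. */
interface InteractionModels {
    void fireAndForget(String message);                        // no response expected
    CompletableFuture<String> requestResponse(String request); // exactly one response
    Stream<String> requestStream(String request);              // zero-to-many responses
    Stream<String> requestChannel(Stream<String> requests);    // bidirectional streams
}

/** Hypothetical in-memory implementation, just to exercise the four shapes. */
class EchoSocket implements InteractionModels {
    final List<String> sent = new ArrayList<>();

    public void fireAndForget(String m) { sent.add(m); }

    public CompletableFuture<String> requestResponse(String r) {
        return CompletableFuture.completedFuture("echo:" + r);
    }

    public Stream<String> requestStream(String r) {
        return Stream.of(r + "-1", r + "-2", r + "-3");
    }

    public Stream<String> requestChannel(Stream<String> requests) {
        return requests.map(s -> "echo:" + s);
    }
}
```

HTTP gives you only the request/response shape; having all four as first-class citizens of the protocol is what makes RSocket a better fit for streaming, reactive applications.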
4. Batch processing at scale
Although new programming models are always fun to talk about, certain technology paradigms, such as batch, have been around forever and are heavily entrenched in enterprise computing. In some cases, batch processing is still the most efficient way of utilizing computing resources. Moving batch processing from data centers to the cloud enables scaling and cost reduction, but it opens a Pandora’s box of complex issues such as synchronization, orchestration, parallelization, resiliency, and job progression feedback.
Spring Batch 4.1 continues the transformation of “batch processing at scale” from data centers to the cloud that was started with Spring Batch 4.0. It enables scaling of batch processing in the cloud with simple configuration or minimal coding. Batch job steps can be effectively parallelized across cloud nodes and/or process threads.
Spring Cloud Data Flow coupled with Spring Batch 4.1 allows for batch orchestration and tracks job progression feedback in an intuitive, visual way.
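The partitioned, parallel step idea can be sketched with a plain thread pool. Spring Batch’s actual mechanism is its Partitioner/worker-step abstraction (with remote workers in the cloud); the names below are ours. Split the input range into partitions, process each on its own worker, and aggregate once every partition completes:

```java
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

class PartitionedStep {
    /** Splits [0, totalRecords) into equal partitions, "processes" each on its own
     *  worker thread (here: summing record ids), and aggregates when all complete. */
    static long processInParallel(int totalRecords, int partitions) {
        ExecutorService pool = Executors.newFixedThreadPool(partitions);
        try {
            int chunk = totalRecords / partitions; // assumes an even split, for simplicity
            List<Future<Long>> results = IntStream.range(0, partitions)
                    .mapToObj(p -> pool.submit(() ->
                            (long) IntStream.range(p * chunk, (p + 1) * chunk).sum()))
                    .collect(Collectors.toList());
            long total = 0;
            for (Future<Long> f : results) total += f.get(); // the job finishes when every partition does
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(processInParallel(1_000, 4)); // prints 499500 (sum of 0..999)
    }
}
```

In the cloud version, each partition runs on a separate node rather than a thread, and the orchestration, retries, and progress tracking are handled by Spring Cloud Data Flow rather than a Future loop.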
5. AWS + PKS: Pivotal’s newest entry to enterprise Kubernetes
Pivotal’s newest release of PKS (1.2) adds support for enterprise-grade Kubernetes on AWS. This is an addition to preexisting PKS support for vSphere and Google Cloud Platform.
By layering a common set of capabilities on top of each cloud’s infrastructure, PKS offers cloud-agnostic Kubernetes for the enterprise and consistency for multi-cloud consumers. This is great for organizations that have integrated, or are considering integrating, enterprise Kubernetes into their multi-cloud infrastructures and want to maintain consistency across their organizational practices without introducing new and jarring ones. Consumers can leverage what they already know, what they already have, and what they may yet add to their existing multi-cloud infrastructures, which in turn facilitates better efficiency, greater flexibility, and higher delivery throughput.
And it does not stop with AWS. Pivotal has gone full steam ahead with PKS, targeting additional platforms in the future, and is expected to continue expanding across multi-cloud to platforms like Azure.
6. CredHub on K8s: Shhh! It’s a secret
“Configuring credentials is hard. Leaking credentials is easy. And detecting leaked credentials is really hard,” noted Pivotal’s platform architect Peter Blum. CredHub’s announcements at SpringOne highlighted how it eases secret management on Kubernetes: CredHub now integrates natively with K8s.
For developers who have worked with the platform, it is widely understood that K8s has its own challenges in managing secrets. etcd encryption is neither easy nor enabled by default in K8s, and secrets are likely to be exposed if and when master VMs are compromised.
CredHub deals with these problems by providing a centralized place to generate, store, encrypt, and rotate credentials. It can use an HSM as its encryption provider. Kubernetes deployments request credentials at runtime from CredHub, which injects them into the applicable pod. A CredHub annotation added to the deployment YAML requests the appropriate credentials; a Mutating Admission Webhook, a K8s controller, and an initContainer then work together to pull credentials from CredHub into the pod at runtime.
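Assembled, that flow might look like the following deployment fragment. The annotation key and credential names here are hypothetical illustrations of the pattern, not CredHub’s actual API:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
      annotations:
        # Hypothetical annotation key: the Mutating Admission Webhook watches for it
        # and injects an initContainer that fetches the named credential from CredHub
        # at pod startup, so the secret never lives in the deployment YAML or etcd.
        example.credhub/inject: "payment-db-password"
    spec:
      containers:
        - name: payment-service
          image: example/payment-service:1.0
```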
In turn, you can access CredHub using the BOSH config server, the CredHub CLI, or a REST client: CredHub exposes a REST-compatible interface with Get/Set/Generate/Delete operations for credentials and permissions.
This post was co-authored by Joe Nedumgottil, Ed Depaz, Rohini Kulkarni, Keun Lee, Alexander Bronshtein, and Derrick Anderson.