Archives November 2022

API Security: from Defense-in-Depth (DiD) to Zero Trust

Key Takeaways

  • Only a few companies have an API security policy that includes dedicated API testing and protection
  • Defense-in-Depth is a multi-layered defense that provides different types of protection: boundary defense, observability, and authentication
  • Authentication is the most important and can be achieved through password complexity, periodic modification, and two-factor authentication
  • To improve efficiency in security management, one can change from “finding bad people” to “identifying good people”, by utilizing the allowlisting approach
  • Zero Trust is the next level for API security, though it is not a silver bullet

Stats of API Security

According to Salt Security’s 2022 API Security Survey:

  • 95% of the more than 250 survey respondents said they’ve experienced an API security incident in the past 12 months
  • only 11% of respondents have an API security strategy that includes dedicated API testing and protection and 34% lack any security strategy at all for APIs
  • shift-left tactics are falling short, with more than 50% of respondents saying developers, DevOps, or DevSecOps teams are responsible for API security while 85% acknowledge their existing tools are not very effective in stopping API attacks
  • when asked about their biggest concern about their company’s API program, 40% of respondents highlighted gaps in security as their top worry
  • 94% of API exploits are happening against authenticated APIs, according to Salt customer data
  • stopping attacks tops the list of most valuable attributes of an API security platform
  • 40% of respondents are grappling with APIs that change at least every week, with 9% saying their APIs change daily

From the survey, we could see that nearly all companies have experienced API security incidents. However, only 11% of companies have an API security policy that includes dedicated API testing and protection.

So, what kinds of protection should a company build to defend against these attacks? Security problems are not unique to the Internet age. Over thousands of years of human history, various countries have explored and practiced multiple defense strategies and fortifications. These experiences and ideas are also applicable to the field of network security.

For example, a WAF (Web Application Firewall) is analogous to castle walls, identity authentication to a commander’s ID card, and honeypots are used to lure attackers away from high-profile targets. Among these strategies, one of the most effective is Defense-in-Depth (DiD).


Defense-in-Depth is a multi-layered defense strategy that provides different types of protection at each line of defense.

DiD can be roughly divided into three different key areas of defense:

Defending the Borders

Boundary defense is the most basic and common type of defense. Almost all companies invest in boundary defenses, such as WAFs, which use regular expressions and IP denylists to defend against known attack methods and security vulnerabilities.

Most of the so-called attacks are initiated by “script kiddies”, relatively unskilled individuals who do not have strong technical backgrounds or hacker mindsets. They can only attack targets in batches by using pre-written scripts. In this scenario, boundary defense can well resist these indiscriminate attacks.

As the outermost layer of protection, boundary defense has always been one of the necessary defense methods, even though, from a technical point of view, operating a WAF does not require particularly strong technical skills.

In addition to WAFs, some defense tools are explicitly designed for bots. Using bots to carry out “credential stuffing” attacks is a standard method to attempt to steal high-value digital assets. The strategy is to buy login information, such as leaked emails/passwords, and then attempt to log in to other websites in batches. The most efficient defense method to combat credential stuffing is to identify the bot and intercept all requests made by the bot.

The basic strategy behind bot interception is a defense tool deployed between the server and client acting as an intermediary. When the server returns a response page, the tool inserts JavaScript into that response. It requires the client (browser or bot) to execute the JavaScript to calculate the token and put it into the cookie. After the tool receives the token, it will judge whether the other party is malicious based on the token value and take corresponding actions. It can also encrypt specified page contents (such as the URL on the page or the content of the form).
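To make the flow concrete, the server-side half of such a token check can be sketched in PHP. This is an illustrative sketch only: the challenge scheme, function names, and cookie handling are assumptions for the example, not any specific product’s protocol.

```php
<?php
// Illustrative sketch of the server-side token check described above.
// The injected JavaScript is assumed to receive a per-session challenge
// and store sha256(challenge) in a cookie; clients that never execute
// the JavaScript (most simple bots) cannot produce the token.

function expectedToken(string $challenge): string {
    return hash('sha256', $challenge);
}

function isLikelyBrowser(string $challenge, ?string $cookieToken): bool {
    if ($cookieToken === null) {
        return false; // no token at all: the client never ran the JS
    }
    // Constant-time comparison to avoid leaking token bytes via timing.
    return hash_equals(expectedToken($challenge), $cookieToken);
}

var_dump(isLikelyBrowser('challenge-42', hash('sha256', 'challenge-42'))); // bool(true)
var_dump(isLikelyBrowser('challenge-42', null));                           // bool(false)
```

A request arriving without a valid token would then be dropped or challenged before it reaches the application.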

Although, in theory, this kind of encryption is not difficult for a determined cryptography enthusiast (because a perfect key establishment mechanism is not implemented between the client and the server), it is enough to stop hackers with penetration tools.

In an enterprise server-side architecture, the WAF, SSL gateway, traffic gateway, and API gateway are all connected in series, with upstream and downstream relationships. Each can be deployed separately or combined into one component. Take Apache APISIX as an example: it provides several standard features for boundary defense, including IP allowlists and denylists, a WAF rules engine, fault injection, and rate limiting. When used in combination, these can block almost all indiscriminate attacks.

Detecting Intruders

In network security, the scariest event is not actually finding a security incident, but never finding one. The latter means that you may have been hacked many times without realizing it.

Being able to observe security risks is critical in combating targeted attacks. After a hacker has breached the outermost layer of defenses, we need observability mechanisms to identify which traffic is likely the malicious attack traffic.

Common means of implementing security observability include honeypots, IDS (Intrusion Detection System), NTA (Network Traffic Analysis), NDR (Network Detection and Response), APT (Advanced Persistent Threat) detection, and threat intelligence. Among them, honeypots are one of the oldest methods: by imitating high-value targets to set traps for malicious attackers, they make it possible to analyze attack behaviors and even help locate attackers.

On the other hand, APT detection and some machine learning methods are not intuitive to evaluate. Fortunately, for most enterprise users, simple log collection and analysis, behavior tracking, and digital evidence are enough for the timely detection of abnormal behaviors.

Machine learning is an advanced but imperfect technology that still produces false positives and false negatives. However, most enterprises don’t currently need such solutions. They care more about log collection and behavior tracking because they need to gather digital evidence. Digital evidence not only serves as proof of the crime but also helps companies better understand the weaknesses of their internal systems.

For example, when a hacker attacks a system and steals data, a behavior tracking solution would track a number of things including: when the hacker attacked, which servers were accessed, which data has been exfiltrated, and how the hacker behaved within the environment. These access logs can be found and gathered as digital evidence.

At the observability level, Apache APISIX can do more than tracing, logging, and metrics. It can use plugins to simulate high-value targets to confuse attackers, utilize traffic mirroring to send a portion of requests to blocking and non-blocking security detection tools for analysis, and make quick security decisions to stop an attack.

Preventing Lateral Movement

When an attacker breaks through the outer defense line and enters the intranet, it is time for authentication and access control to play their role. This defense method is similar to showing your ID card to purchase alcohol or your passport at an airport checkpoint. Without the corresponding identity and authority, it is impossible to enter the corresponding system.

This line of defense reflects a company’s basic security skills and technical strength. Many companies’ internal systems do not have well-established authentication and authorization architecture, and implementing those solutions requires long-term and continuous investment.

For example, financial institutions such as banks, securities firms, and insurance companies run numerous systems that have been in use for years. The significant number of employees and legacy systems makes unifying identity authentication through SSO (Single Sign-On) costly in both time and money.

This line of defense essentially puts obstacles everywhere for the attackers, making it impossible for them to move laterally within the network.

It is worth mentioning that few companies do well at seemingly simple password management. Password complexity, mandatory periodic changes, and mandatory two-factor authentication (SMS or dynamic password) for critical systems are easier said than done. Only companies that understand network security well can implement them well. There are several reasons for this:

  • Many companies pay insufficient attention to password management because they believe passwords only matter within the internal network, which they assume is relatively safe.
  • Regular forced password changes burden the IT department, since every employee’s passwords are different and hard to manage centrally.
  • Unless the policy is mandated at the company level, implementing consistent and effective password management is challenging.

This third line of defense is the most important one. Boundary protection and security threat observation, to a large extent, exist to help the third line of defense. If an application system itself is not safe, no matter how strong the first two lines of defense are, there will be attacks that slip through the net.

This line of defense is where an API Gateway comes in strong as it provides a number of key authentication features:

  • Various authentication methods, such as JWT and key auth
  • Integration with multiple authentication systems and protocols, such as Okta and OAuth 2.0
  • TLS and mTLS encryption of traffic
  • Automatic rotation of keys
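As a rough illustration of what the JWT option involves, here is a minimal HS256 verification sketch in PHP. It is not APISIX’s implementation, and a real gateway must also validate claims such as exp and iss; the function names are invented for the example.

```php
<?php
// Minimal illustrative JWT HS256 verification: recompute the signature
// over "header.payload" with the shared secret and compare it in
// constant time; return the decoded claims only on success.

function base64urlDecode(string $data): string|false {
    return base64_decode(strtr($data, '-_', '+/'));
}

function verifyJwtHs256(string $jwt, string $secret): ?array {
    $parts = explode('.', $jwt);
    if (count($parts) !== 3) {
        return null; // malformed token
    }
    [$header, $payload, $signature] = $parts;
    $expected = hash_hmac('sha256', "$header.$payload", $secret, true);
    $given = base64urlDecode($signature);
    if ($given === false || !hash_equals($expected, $given)) {
        return null; // signature mismatch: reject the request
    }
    // Signature is valid; hand the claims to the authorization layer.
    return json_decode(base64urlDecode($payload), true);
}
```

A gateway would run such a check on every request and reject any token that fails it before the request reaches an upstream service.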

In summary, these three defense layers require inter-department cooperation to block security attacks effectively. They are typically controlled by different departments: the WAF by the Security Operations team, observability by the R&D teams, and authentication by the IT department. Each area will use different tools, such as WAFs, OpenTelemetry, and Keycloak, to implement its respective solution. This split of responsibility across many teams is why effective blocking requires inter-department cooperation.

So, is there a more efficient way, the so-called “silver bullet,” to solve all security problems?

From “Denylisting” to “Allowlisting”

The network security community has been thinking about this issue for a long time. The strategy for most security defense and detection methods is to look for a small number of security threats within a massive quantity of data, similar to finding a needle in a haystack. There will inevitably be both false positives and false negatives.

What if we change our thinking and turn “finding bad people” into “identifying good people”? Will it help us shine a new light on the problems?

Over ten years ago, some antivirus software companies began to make such an attempt. Their logic was to add commonly used software to an allowlist, identifying executable programs one by one; anything left over would be treated as a virus. In that case, no new virus could escape detection. The plan sounded ideal. Nevertheless, it took the software companies four or five years to implement it, and the result was not pure allowlisting but hierarchical management.

The allowlisting approach is equally applicable to API security. For example, suppose a company provides a payment API that requires a token to access. If there is no token, or the token is invalid, the request must be treated as malicious and rejected directly.

Ideally, all APIs should have similar authentication and access controls, and only authenticated access is allowed. Although this cannot defend against internal threats, social engineering, and 0-day attacks, it will significantly increase the threshold of attacks, making the cost high and indiscriminate attacks impossible.
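A deny-by-default check captures the idea in a few lines; the token values and the in-memory allowlist here are hypothetical stand-ins for a real credential store:

```php
<?php
// Deny by default: only explicitly allowlisted tokens may call the API.
// Everything else - missing, unknown, or revoked tokens - is rejected.
const ALLOWED_TOKENS = [
    'token-billing-service'  => true,
    'token-checkout-service' => true,
];

function authorize(?string $token): bool {
    return $token !== null && isset(ALLOWED_TOKENS[$token]);
}

var_dump(authorize('token-billing-service')); // bool(true)
var_dump(authorize('unknown-or-stolen'));     // bool(false)
var_dump(authorize(null));                    // bool(false)
```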

For attackers, when the ROI (Return on Investment) becomes unreasonable, they will immediately turn around and look for other easy-to-break targets.

Based on the ideas of defense in depth and allowlisting, a “zero trust” security framework has gradually evolved, hoping to solve network security problems once and for all.

Is Zero Trust the Silver Bullet for API Security?

What is Zero Trust? Put simply: there is no trusted client in the system, so you will need to verify your identity everywhere.

Whether it is an external request or internal access, a bastion or a springboard, a mobile phone or a PC, an ordinary employee or a CEO, none can be trusted. Access to systems and APIs is allowed only after authentication.

It seems that there are no loopholes except that it is troublesome to verify the identity everywhere. So is zero trust the silver bullet for cybersecurity?

Simply put: no, zero trust is not a silver bullet.

Zero trust is a comprehensive security framework and strategy. It requires adding strict, unified identity authentication to all terminals, BYOD (Bring Your Own Device) hardware, servers, APIs, microservices, data storage, and internal services. Think of zero trust as a safety air cushion.

We can understand the zero trust model through the Wooden Bucket Theory, which states that a wooden bucket can only hold as much water as its shortest plank. Translated to security, our defenses are only as good as our weakest spot, and attackers will always go after that weakest part. If the bucket leaks, it doesn’t matter how much water it could hold. That said, 90% coverage of a zero trust model is still substantially better than 0% coverage, because each measure increases the attackers’ cost. With 90% coverage, roughly 80%-90% of malicious attacks could be intercepted.

The implementation of zero trust is complicated. Imagine adding identification equipment in all transportation hubs, such as airports and high-speed railways. It is incredibly costly in terms of time and money.

In a large enterprise, there will be hundreds of systems, tens of thousands of APIs and microservices, and hundreds of thousands of clients. It takes great effort and cost to build a complete zero-trust system. Therefore, zero trust is mainly implemented in government, military, and large enterprises. For most enterprises, it is wiser to learn from the idea of zero trust and build a security defense system with a higher ROI.

There are two core components of Zero Trust:

  • Identity and Access Management
  • API Gateway with integrated security

The focus on these two components can make the implementation of zero trust more realistic.

Note here that zero trust is not flawless. Zero-trust solutions cannot fully defend against zero-day or social engineering attacks, although they can greatly reduce the blast radius of those attacks.


However, security is a never-ending game of cat and mouse because attackers are always hoping to find means to acquire high-value digital assets or to achieve their destructive outcomes. Defense alone cannot avoid the attacker’s guns and arrows.

Consequently, it is also necessary to improve the security awareness of developers and reduce exposure to vulnerable surfaces as much as possible.

Developers stand between the code and the application: on the left is the “code”, and on the right is the “application”. We need to pay more attention to the left side, as code is the root of most problems. We should adopt the “shift left” approach, which means a DevOps team guarantees application security at the earliest stages of the development lifecycle. The exposed vulnerable surface can be significantly reduced if developers improve their security awareness.

PHP 8 – Classes and Enums

Key Takeaways

  • PHP 8.1 adds support for read-only properties that make class properties invariant and unmodifiable.
  • PHP 8.0 has a new feature that automatically promotes class constructor parameters to corresponding class properties with the same name if the constructor parameters are declared with a visibility modifier and are not of type callable.
  • PHP 8.1 adds support for final class and interface constants, and for interface constants that can be overridden. 
  • As of PHP 8.0, the special ::class constant can be used on objects, and as of PHP 8.1 objects can be used in define().
  • PHP 8.1 adds support for enumerations, or enums for short, to declare an enumerated set of values that are similar to, though not the same as, class objects.

This article is part of the article series “PHP 8.x”.

PHP continues to be one of the most widely used scripting languages on the web, used by 77.3% of all websites whose server-side programming language is known, according to W3Techs. PHP 8 brings many new features and other improvements, which we shall explore in this article series.


In this article, we will review new PHP 8 features related to classes, including:

  • Enums,  a layer over classes to specify an enumerated list of possible values for a type 
  • The new readonly modifier for a class property, which makes the property unmodifiable after its initialization 
  • Constructor parameter promotion, useful to assign a constructor parameter value to an object property automatically.

Read-Only Class Properties

Developers have long searched for ways to make class properties immutable for use cases such as value objects. Often, properties must be initialized once, usually in a constructor, and are never meant to be modified afterwards. One workaround is to make a property private and declare only a public getter method for it. This reduces the scope for modification but does not preclude it. To make a class property invariant, PHP 8.1 adds support for readonly properties, with the condition that the property must be typed. A typed property can be declared readonly with the new readonly keyword. The following script declares a readonly property of type int called $a. The property’s value is set only once, in the constructor. The script outputs the value 1 when run.

<?php
class A {
    public readonly int $a;
    public function __construct(int $a) {
        $this->a = $a;
    }
}
$a = new A(1);
echo $a->a;

To demonstrate the effect of making the property readonly, modify its value with the following assignment.

 $a->a = 2;

This will generate an error message:

Fatal error: Uncaught Error: Cannot modify readonly property A::$a 

To demonstrate that the condition that the readonly property must be typed holds, try making an untyped property readonly as in the following script:

<?php
class A {
    public readonly $a;
    public function __construct($a) {
        $this->a = $a;
    }
}
$a = new A(1);

The script generates an error message:  

Fatal error: Readonly property A::$a must have type

If you don’t want a  readonly property to have a specific type, you can declare it as mixed, e.g.:

public readonly mixed $a;
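For instance, the following small sketch shows a readonly mixed property accepting values of different types while remaining initialize-once:

```php
<?php
class A {
    public readonly mixed $a;
    public function __construct(mixed $a) {
        $this->a = $a; // one-time initialization; any later write fails
    }
}
echo (new A(1))->a;      // 1
echo (new A("text"))->a; // text
```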

In addition to the type requirement, other limitations apply to readonly properties. A readonly property cannot be declared static. Run the following script to demonstrate it:

<?php
class A {
    public static readonly int $a;
    public function __construct(int $a) {
        self::$a = $a;
    }
}
$a = new A(1);

The script generates an error message:

Fatal error: Static property A::$a cannot be readonly

A readonly property can only be initialized from the scope in which it is declared. The following script initializes a readonly property, but not in the scope in which it is declared:

<?php
class A {
    public readonly int $a;
}
$a = new A();
$a->a = 1;
The script generates an error message when run:

Fatal error: Uncaught Error: Cannot initialize readonly property A::$a from global scope 

You may consider declaring a readonly property with a default value at the time of initialization, but this wouldn’t be particularly useful as you could use a class constant instead. Therefore, setting a default value for a readonly property has been disallowed. The following script declares a default value for a readonly property.

<?php
class A {
    public readonly int $a = 0;
    public function __construct(int $a) {
        $this->a = $a;
    }
}

The script generates an error message when run:

Fatal error: Readonly property A::$a cannot have default value

The objective of the readonly property feature is to make a class property immutable. Therefore, a readonly property cannot be unset with unset() after initialization. The following script calls unset() on a readonly property after it has been initialized.

<?php
class A {
    public readonly int $a;
    public function __construct(int $a) {
        $this->a = $a;
    }
}
$a = new A(1);
unset($a->a);

The script generates an error message when run:

Fatal error: Uncaught Error: Cannot unset readonly property A::$a 

You could always call unset() on a readonly property before initialization as in the following script:

<?php
class A {
    public readonly int $a;
    public function __construct(int $a) {
        unset($this->a);
        $this->a = $a;
    }
}
$a = new A(1);
echo $a->a;

The script runs with no errors and outputs the value 1.

A readonly property cannot be modified by simple reassignment or by any other operator manipulation. The following script does not use a reassignment statement for a readonly property, but uses an increment operator on it. 

<?php
class A {
    public readonly int $a;
    public function __construct(int $a) {
        $this->a = $a;
    }
}
$a = new A(1);
$a->a++;

The effect is the same, and so is the error message:

Fatal error: Uncaught Error: Cannot modify readonly property A::$a

Specifically, it is just the readonly property itself that is immutable, not any objects or resources stored in it. You may modify any objects, and non-readonly properties, stored in a readonly property. The following script sets the value of a class property $a that is not readonly through a readonly property $obj of type object.

<?php
class B {
    public int $a;
}
class A {
    public readonly object $obj;
    public function __construct() {
        $this->obj = new B();
    }
}
$a = new A();
$a->obj->a = 1;
echo $a->obj->a;

The script runs with no errors and outputs the value of 1. 

PHP 8.2 adds readonly classes as an extension  of the readonly class properties feature. If a class is declared with the readonly modifier, all class properties are implicitly readonly. The class properties in a readonly class must be typed and non-static, for example:

<?php
readonly class A {
    public int $a;
    public string $b;
    public array $c;
    public function __construct() {
        $this->a = 1;
        $this->b = "hello";
        $this->c = [
                    "1" => "one",
                    "2" => "two",
                   ];
    }
}
Readonly classes do have some limitations in that dynamic properties cannot be defined, and only a readonly class can extend another readonly class. 
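The dynamic-property limitation can be demonstrated with a short sketch (requires PHP 8.2):

```php
<?php
readonly class A {
    public int $a;
    public function __construct() {
        $this->a = 1;
    }
}
$a = new A();
try {
    $a->b = 2; // dynamic property: not allowed on a readonly class
} catch (Error $e) {
    echo $e->getMessage();
}
```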

Constructor property promotion

The objective of constructor property promotion, a feature introduced in PHP 8.0, is to make class property declarations and initializations unnecessary. To elaborate, consider the following script in which class properties $pt1, $pt2, $pt3, and $pt4 are declared in the Rectangle class and initialized in the class constructor. 

<?php
class Point {
    public float $pt = 0;
}
class Rectangle {
    private Point $pt1;
    private Point $pt2;
    private Point $pt3;
    private Point $pt4;
    public function __construct(Point $pt1, Point $pt2, Point $pt3, Point $pt4) {
        $this->pt1 = $pt1;
        $this->pt2 = $pt2;
        $this->pt3 = $pt3;
        $this->pt4 = $pt4;
    }
}

With the new constructor property promotion feature the script is reduced to the following:

<?php
class Point {
    public float $pt = 0;
}
class Rectangle {
    public function __construct(
        public Point $pt1,
        public Point $pt2,
        public Point $pt3,
        public Point $pt4) {
    }
}
The constructor body may be empty or may contain other statements, which are run after the promotion of constructor arguments to the corresponding class properties. The only requirement for a constructor argument to be promoted to a class property is that it include a visibility modifier. The constructor’s argument value is automatically assigned to a class property with the same name. 

The following script snippet demonstrates the automatic promotion and initialization of public constructor arguments to class properties, using the Rectangle class with a call to the class constructor as follows:

$a = new Rectangle(new Point(), new Point(), new Point(), new Point());
var_dump($a->pt1);
var_dump($a->pt2);
var_dump($a->pt3);
var_dump($a->pt4);
You’ll find that the class properties do get added and initialized implicitly, giving the following output.

object(Point)#1 (1) { ["pt"]=> float(0) } 
object(Point)#2 (1) { ["pt"]=> float(0) } 
object(Point)#3 (1) { ["pt"]=> float(0) } 
object(Point)#4 (1) { ["pt"]=> float(0) }

If a constructor argument does not include a visibility modifier, it is not promoted to the corresponding class property. Not all constructor arguments have to be promoted. The following script does not promote constructor argument $pt4, as it is not declared with a visibility modifier.

<?php
class Point {
    public float $pt = 0;
}
class Rectangle {
    public function __construct(
        public Point $pt1,
        public Point $pt2,
        public Point $pt3,
        Point $pt4) {
    }
}
Call the Rectangle class constructor and output class property values, as before. In this case, the result is different because $pt4 does not include a visibility modifier and therefore it is never promoted to a corresponding class property. A warning message is output:

Warning: Undefined property: Rectangle::$pt4

You would need to declare and initialize the $pt4 class property explicitly, as in the modified script:

<?php
class Point {
    public float $pt = 0;
}
class Rectangle {
    public Point $pt4;
    public function __construct(
        public Point $pt1,
        public Point $pt2,
        public Point $pt3,
        Point $pt4) {
        $this->pt4 = $pt4;
    }
}
Now, you can call the constructor and output class properties as before with the same output.  

Another requirement for a class constructor argument to be promoted to a corresponding class property is that it is not of type callable. The following script declares constructor arguments of type callable.

<?php
class Rectangle {
    public function __construct(
        public callable $pt1,
        public callable $pt2) {
    }
}
When run the script generates an error message:

Fatal error: Property Rectangle::$pt1 cannot have type callable

In the first article in the PHP 8 series, we explained how to use the new operator in initializers, including for initializing default values for function parameters. The new operator may also be used to set constructor parameter default values, along with constructor property promotion, as in the following script.

<?php
class Point {
    public float $pt = 0;
}
class Rectangle {
    public function __construct(
        public Point $pt1 = new Point(),
        public Point $pt2 = new Point(),
        public Point $pt3 = new Point(),
        public Point $pt4 = new Point()) {
    }
}
The Rectangle class constructor may be called without any constructor arguments, and the promoted property values be output:

$a = new Rectangle();
var_dump($a->pt1);
var_dump($a->pt2);
var_dump($a->pt3);
var_dump($a->pt4);

The output is:

object(Point)#2 (1) { ["pt"]=> float(0) } 
object(Point)#3 (1) { ["pt"]=> float(0) } 
object(Point)#4 (1) { ["pt"]=> float(0) } 
object(Point)#5 (1) { ["pt"]=> float(0) }

Objects can be used in define()

The built-in define() function is used to define named constants. With PHP 8.1, objects can be passed to define(), as in the following example script.

<?php
class Point {
    public function __construct(public float $pt = 1.0) {}
}
define('POINT', new Point());
var_dump(POINT);
The output from the script is:

object(Point)#1 (1) { ["pt"]=> float(1) }

Class constants may be declared final

PHP 8.1 allows you to declare class constants using the final keyword. If a class constant is declared final in a class, any class extending it cannot override, or redefine, the constant’s value. In the following script, a class constant c, which is declared final in class A, is redefined in a class B that extends it.

<?php
class A {
    final const c = "A constant";
}
class B extends A {
    const c = "B constant";
}
When the script is run, the following error message is generated:

Fatal error: B::c cannot override final constant A::c 

The special ::class constant can be used on objects

The special ::class constant, which allows for fully qualified class name resolution at compile time, can also be used with class objects as of PHP 8.0. One difference is that class name resolution happens at runtime with objects, unlike the compile-time resolution for classes. Using ::class on an object is equivalent to calling get_class() on the object. The following script uses ::class on an object of class A, which outputs “A”.

<?php
class A {}
$a = new A();
echo $a::class;
Interface constants can be overridden

As of PHP 8.1, interface constants can be overridden by a class, or interface, that inherits them. In the following script, interface constant c is overridden by a class constant with the same name. The value of the overriding constant may be the same or different.

<?php
interface A {
    const c = 1;
}
class B implements A {
    const c = 2;
}
Both constant values may be output:

echo A::c;
echo B::c;

The output is:

12
As with class constants declared final, interface constants declared final cannot be overridden. The following script overrides an interface constant declared final.

<?php
interface A {
    final const c = 1;
}
class B implements A {
    const c = 2;
}
An error message is output when the script is run:

Fatal error: B::c cannot override final constant A::c

Autoload function __autoload() is removed

The __autoload() function that was deprecated in PHP 7.2.0 has been removed in PHP 8.0. If the __autoload() function is called, the following error message results:

Fatal error: Uncaught Error: Call to undefined function __autoload()



Enums

An enumeration, or enum for short, is a new feature for declaring a custom type with an explicit set of possible values. The new language construct enum is used to declare an enumeration, with the simplest enumeration being an empty one:

enum FirstEnum {
}

An enum may declare possible values using the case keyword, for example:

enum SortType {
  case Asc;
  case Desc;
  case Shuffle;
}
The discussion on enumerations is bundled with classes because of their similarity.

How enums are similar to classes

  • An enum is a class. The example enum SortType is a class, and its possible values are object instances of the class.
  • Enums share the same namespace as classes, interfaces, and traits.
  • Enums are autoloadable as classes are.
  • Each enum value, such as the Asc, Desc, and Shuffle values for the SortType enum, is an object instance. An enum value would pass an object type check.
  • An enum’s values, or the case names, are internally represented as class constants, and are therefore case-sensitive.
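These similarities can be verified directly; a small sketch using the SortType enum:

```php
<?php
enum SortType {
  case Asc;
  case Desc;
  case Shuffle;
}
// Each case is a singleton object instance of the SortType class.
var_dump(SortType::Asc instanceof SortType); // bool(true)
var_dump(SortType::Asc === SortType::Asc);   // bool(true)
var_dump(is_object(SortType::Desc));         // bool(true)
```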

Enums are useful for several use cases, such as:

  • A structured alternative to a set of constants 
  • Custom type definitions
  • Data modeling
  • Monad-style programming
  • Defining a domain model
  • Validation by making unsupported values unrepresentable, resulting in reduced requirement for code testing 

We shall discuss enums with some examples. Because an enum’s values are objects, they may be used wherever an object can be used, including as a function parameter type and a function return type. In the following script, enum SortType is used as both a function parameter type and a function return type.

<?php
enum SortType {
  case Asc;
  case Desc;
  case Shuffle;
}
function defaultSort(SortType $sortType): SortType {
    var_dump($sortType);
    return $sortType;
}
defaultSort(SortType::Asc);

The output from the script is:

enum(SortType::Asc)

Next, we shall use the same example of sorting an array that  we used in the first article in this series. The parameter for the sortArray function is of type SortType, which is an enum. 

function sortArray(SortType $sortType) {
  $arrayToSort = array("B", "A", "f", "C");
  ...
}

The enum object value is compared using the == operator.

if ($sortType == SortType::Asc){...}

The same example as used with enums is as follows:

<?php
enum SortType {
  case Asc;
  case Desc;
  case Shuffle;
}
function sortArray(SortType $sortType) {
    $arrayToSort = array("B", "A", "f", "C");
    if ($sortType == SortType::Asc) {
        sort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
            echo "$key = $val ";
        }
    } elseif ($sortType == SortType::Desc) {
        rsort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
            echo "$key = $val ";
        }
    } elseif ($sortType == SortType::Shuffle) {
        shuffle($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
            echo "$key = $val ";
        }
    }
    echo "\n";
}
sortArray(SortType::Asc);
sortArray(SortType::Desc);
sortArray(SortType::Shuffle);

The output of the script  for the array sorting example is:

0 = A 1 = B 2 = C 3 = f 
0 = f 1 = C 2 = B 3 = A 
0 = f 1 = A 2 = B 3 = C

Because an enum case, or possible value, is an object instance, the instanceof operator may be used with an enum value, for example:

if ($sortType instanceof SortType) {...}

An enum’s value is not converted to a string and cannot be used as an equivalent string. For example, if you call the sortArray() function with a string argument:

sortArray('Asc');
An error would result:

Fatal error: Uncaught TypeError: sortArray(): Argument #1 ($sortType) must be of type SortType, string given

All enum values, or cases, have a read-only property called name that has its value as the case-sensitive name of the case. The name property could be used for debugging. For example, the following print statement would output “Asc”. 

print SortType::Asc->name;  

An enum’s values must be unique, case-sensitive values. The following script has unique values:

enum SortType {
  case Asc;
  case Desc;
  case Shuffle;
}
But the following script doesn’t declare unique values:

enum SortType {
  case Asc;
  case Desc;
  case Asc;
}
The script generates the following error message:

 Fatal error: Cannot redefine class constant SortType::Asc

The enumerations we discussed are basic enumerations, or pure enums. A pure enum only defines pure cases with no related data. Next, we discuss another type of enums called backed enums.

Backed enums

A backed enum defines scalar equivalents of type string or int for the enum cases, for example: 

enum SortType:int {
  case Asc=1;
  case Desc=2;
  case Shuffle=3;
}

The scalar equivalent can be of type int or string, but not a union of int|string, and all cases of a backed enum must declare a scalar value. To demonstrate, use the following backed enum, in which the Shuffle case doesn't declare a value:

enum SortType:int {
  case Asc=1;
  case Desc=2;
  case Shuffle;
}
It would produce an error message:

Fatal error: Case Shuffle of backed enum SortType must have a value

The scalar equivalents for backed enum cases must be unique. To demonstrate, use the following script that declares the same scalar equivalent for two enum cases:

enum SortType:int {
  case Asc=1;
  case Desc=2;
  case Shuffle=2;
}
The script would result in an error message:

Fatal error: Duplicate value in enum SortType for cases Desc and Shuffle

The scalar equivalents may be constant expressions in addition to literal values, for example:

enum SortType:int {
  case Asc=1;
  case Desc=1 + 1;
  case Shuffle=1 + 2;
}
All backed enum values, or backed cases, have an additional read-only property called value that has its value as the scalar value of the backed case. For example, the following print statement would output the scalar equivalent value for the Desc case:

print SortType::Desc->value;

Here, value is a read-only property and unmodifiable. The following snippet assigns a variable as a reference to the value property of a backed case:

$sortType = SortType::Desc;
$ref = &$sortType->value;

The variable assignment would generate an error message:

Fatal error: Uncaught Error: Cannot modify readonly property SortType::$value 

Backed enums implement an internal interface BackedEnum that declares two methods:

  • from(int|string): self – Takes a scalar enum value for a backed case and returns the corresponding enum case. Throws a ValueError if the scalar value is not found.
  • tryFrom(int|string): ?self – Takes a scalar enum value for a backed case and returns the corresponding enum case. Returns null if the scalar value is not found.

The following script demonstrates the use of these methods. The tryFrom(4) call finds no matching case and returns null, so the null coalescing operator selects SortType::Desc; the from("4") call throws a ValueError:

$sortType = SortType::tryFrom(4) ?? SortType::Desc;
print $sortType->value;
echo "\n";
$sortType = SortType::from("4");

The output is:

Fatal error: Uncaught ValueError: 4 is not a valid backing value for enum "SortType"

The from() and tryFrom() methods use strict/weak typing modes, the default being weak typing, which implies some implicit conversion. Float and string values for integers get converted to integer values as demonstrated by the following script:

$sortType = SortType::tryFrom("4") ?? SortType::Desc;
print $sortType->value;
echo "\n";
$sortType = SortType::from("2.0");
print $sortType->value;

The output is:

2
2
A string that cannot get converted to an integer must not be passed when an int is expected, as in:

$sortType = SortType::from("A");

The preceding would result in an error message:

Fatal error: Uncaught TypeError: SortType::from(): Argument #1 ($value) must be of type int, string given

In strict typing mode, the type conversion is not applied, and error messages such as the preceding, or the following are generated:

Fatal error: Uncaught TypeError: SortType::from(): Argument #1 ($value) must be of type int, float given
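To illustrate, here is a minimal sketch (not from the original article) that enables strict typing and triggers the error above; the enum declaration mirrors the int-backed SortType used throughout:

```php
<?php
declare(strict_types=1);

enum SortType: int {
    case Asc = 1;
    case Desc = 2;
    case Shuffle = 3;
}

try {
    // In strict typing mode, the float 2.0 is not coerced to int 2:
    $sortType = SortType::from(2.0);
} catch (TypeError $e) {
    echo $e->getMessage();
}
```

Without the declare(strict_types=1) line, the same call would succeed in weak mode and return SortType::Desc.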

Both pure and backed enums implement an internal interface called UnitEnum, which provides a static method called cases() that returns an array of the possible values for the enum, i.e., the enum cases. The following script demonstrates the cases() method for a pure enum, SortType, and a backed enum, BackedSortType:

var_dump(SortType::cases());
var_dump(BackedSortType::cases());
The output is:

array(3) { [0]=> enum(SortType::Asc) [1]=> enum(SortType::Desc) [2]=> enum(SortType::Shuffle) } 

array(3) { [0]=> enum(BackedSortType::Asc) [1]=> enum(BackedSortType::Desc) [2]=> enum(BackedSortType::Shuffle) }

Enums may include methods and implement an interface

Enums, both pure and backed, may declare methods, similar to class instance methods. Enums may also implement an interface, in which case the enum must implement the interface's functions in addition to any other functions it declares. The following script is a variation of the same array-sorting example, with an enum that implements an interface. The enum implements a function from the interface in addition to a function not belonging to the interface.

interface SortType {
   public function sortType(): string;
}

enum SortTypeEnum implements SortType {
   case Asc;
   case Desc;
   case Shuffle;

   public function sortType(): string {
        return match($this) {
            SortTypeEnum::Asc => 'Asc',
            SortTypeEnum::Desc => 'Desc',
            SortTypeEnum::Shuffle => 'Shuffle',
        };
   }

   public function notFromInterface(): string {
        return "Function Not From Interface";
   }
}

function sortArray(SortType $sortType) { 

   $arrayToSort=array("B", "A", "f", "C");
   if ($sortType->sortType() == "Asc") {
        sort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
   } elseif ($sortType->sortType() == "Desc") {
        rsort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
   } elseif ($sortType->sortType() == "Shuffle") {
        shuffle($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
   } elseif ($sortType instanceof SortType) {
        sort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
   }
}

$val = SortTypeEnum::Asc;
sortArray(SortTypeEnum::Asc);
sortArray(SortTypeEnum::Desc);
sortArray(SortTypeEnum::Shuffle);
print SortTypeEnum::Asc->notFromInterface();

The output from the script is as follows:

0 = A 1 = B 2 = C 3 = f
0 = f 1 = C 2 = B 3 = A
0 = C 1 = f 2 = B 3 = A
Function Not From Interface 

A backed enum may also implement an interface and provide additional methods as in the following script:

interface SortType {
   public function sortType(): string;
}

enum SortTypeEnum:string implements SortType {
   case Asc = 'A';
   case Desc = 'D';
   case Shuffle = 'S';

   public function sortType(): string {
        return match($this->value) {
            'A' => 'Asc',
            'D' => 'Desc',
            'S' => 'Shuffle',
        };
   }

   public function notFromInterface(): string {
        return "Function Not From Interface";
   }
}

function sortArray(SortType $sortType) { 

   $arrayToSort=array("B", "A", "f", "C");
   if ($sortType->sortType() == "Asc") {
        sort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
   } elseif ($sortType->sortType() == "Desc") {
        rsort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
   } elseif ($sortType->sortType() == "Shuffle") {
        shuffle($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
   } elseif ($sortType instanceof SortType) {
        sort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
   }
}

sortArray(SortTypeEnum::Asc);
sortArray(SortTypeEnum::Desc);
sortArray(SortTypeEnum::Shuffle);
print SortTypeEnum::Asc->notFromInterface();

The output is as follows:

0 = A 1 = B 2 = C 3 = f
0 = f 1 = C 2 = B 3 = A
0 = C 1 = f 2 = B 3 = A
Function Not From Interface 

Enums may declare static methods 

An enum may declare static methods. In a variation of the array sorting example, a static method chooseSortType() is used to choose the sort type based on the length of the array to be sorted:

enum SortType {
   case Asc;
   case Desc;
   case Shuffle;

   public static function chooseSortType(int $arraySize): SortType {
        // Thresholds reconstructed to match the sample output below:
        // the 4-, 12-, and 24-element arrays select Asc, Desc, and Shuffle
        return match(true) {
            $arraySize <= 4 => static::Asc,
            $arraySize <= 12 => static::Desc,
            default => static::Shuffle,
        };
   }
}

function sortArray(array $arrayToSort) { 
    if (SortType::chooseSortType(count($arrayToSort)) == SortType::Asc) {
        sort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
    } elseif (SortType::chooseSortType(count($arrayToSort)) == SortType::Desc) {
        rsort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
    } elseif (SortType::chooseSortType(count($arrayToSort)) == SortType::Shuffle) {
        shuffle($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
    }
}

$arrayToSort=array("B", "A", "f", "C");
sortArray($arrayToSort);
$arrayToSort=array("B", "A", "f", "C","B", "A", "f", "C","B", "A", "f", "C");
sortArray($arrayToSort);
$arrayToSort=array("B", "A", "f", "C","B", "A", "f", "C","B", "A", "f", "C","B", "A", "f", "C","B", "A", "f", "C","B", "A", "f", "C");
sortArray($arrayToSort);

The output is as follows:

0 = A 1 = B 2 = C 3 = f
0 = f 1 = f 2 = f 3 = C 4 = C 5 = C 6 = B 7 = B 8 = B 9 = A 10 = A 11 = A
0 = A 1 = B 2 = B 3 = C 4 = B 5 = C 6 = f 7 = A 8 = f 9 = C 10 = B 11 = f 12 = f 13 = A 14 = A 15 = B 16 = C 17 = f 18 = A 19 = B 20 = C 21 = f 22 = C 23 = A 


Enums may declare constants

An enum may declare constants. The following script declares a constant called A. 
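As a minimal sketch of such a declaration (the constant's value here is assumed for illustration, not taken from the original listing):

```php
<?php
enum SortType {
    case Asc;
    case Desc;
    case Shuffle;

    // An enum may declare constants alongside its cases;
    // the value 'Ascending' is an assumed example value:
    const A = 'Ascending';
}

print SortType::A; // prints "Ascending"
```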

The constants may refer to the enum's own cases, which are also constants. The following script for sorting an array demonstrates the use of constants that refer to the enum in which they are declared:

enum SortType {
   case Asc;
   case Desc;
   case Shuffle;

   const ASCENDING = self::Asc;
   const DESCENDING = self::Desc;
   const SHUFFLE = self::Shuffle;
}

function sortArray(SortType $sortType) { 

   $arrayToSort=array("B", "A", "f", "C");
   if ($sortType == SortType::Asc) {
        sort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
   } elseif ($sortType == SortType::Desc) {
        rsort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
   } elseif ($sortType == SortType::Shuffle) {
        shuffle($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
   }
}

sortArray(SortType::ASCENDING);
sortArray(SortType::DESCENDING);
sortArray(SortType::SHUFFLE);

The output is as follows:

0 = A 1 = B 2 = C 3 = f
0 = f 1 = C 2 = B 3 = A
0 = C 1 = B 2 = f 3 = A

Because an enum's cases are constants themselves, an explicit constant may not redefine an enum's case. We demonstrate this in the following script, in which a constant named Asc is declared in an enum that already declares an Asc case:

enum SortType {
   case Asc;
   case Desc;
   case Shuffle;

   const Asc = 'Ascending';
}
The script generates the following error message:

Fatal error: Cannot redefine class constant SortType::Asc 

An enum's case value must be compile-time evaluatable, as the following script demonstrates.

An error message is generated:

Fatal error: Enum case value must be compile-time evaluatable

Enums with traits

Enums may use traits. The following script for sorting an array declares a trait called ChooseSortType and uses the trait in an enum.

trait ChooseSortType {
   public function chooseSortType(int $arraySize): SortType {
        // Thresholds reconstructed to match the sample output below
        return match(true) {
            $arraySize <= 4 => SortType::Asc,
            $arraySize <= 12 => SortType::Desc,
            default => SortType::Shuffle,
        };
   }
}

enum SortType {
  use ChooseSortType;

  case Asc;
  case Desc;
  case Shuffle;
}

function sortArray(SortType $sortType, array $arrayToSort) { 
    if ($sortType->chooseSortType(count($arrayToSort)) == SortType::Asc) {
        sort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
    } elseif ($sortType->chooseSortType(count($arrayToSort)) == SortType::Desc) {
        rsort($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
    } elseif ($sortType->chooseSortType(count($arrayToSort)) == SortType::Shuffle) {
        shuffle($arrayToSort);
        foreach ($arrayToSort as $key => $val) {
           echo "$key = $val ";
        }
        echo "\n";
    }
}

$arrayToSort=array("B", "A", "f", "C");
sortArray(SortType::Desc,$arrayToSort);
$arrayToSort=array("B", "A", "f", "C","B", "A", "f", "C","B", "A", "f", "C");
sortArray(SortType::Asc,$arrayToSort);
$arrayToSort=array("B", "A", "f", "C","B", "A", "f", "C","B", "A", "f", "C","B", "A", "f", "C","B", "A", "f", "C","B", "A", "f", "C");
sortArray(SortType::Desc,$arrayToSort);

The output is as follows:

0 = A 1 = B 2 = C 3 = f
0 = f 1 = f 2 = f 3 = C 4 = C 5 = C 6 = B 7 = B 8 = B 9 = A 10 = A 11 = A
0 = B 1 = A 2 = C 3 = f 4 = B 5 = A 6 = B 7 = A 8 = B 9 = A 10 = f 11 = A 12 = C 13 = B 14 = f 15 = f 16 = C 17 = f 18 = C 19 = B 20 = C 21 = C 22 = A 23 = f 

How are enums different from classes

While we mentioned that enums are similar to classes, they are different in many regards:

  • Enums are serialized differently from objects
  • Enums don't have state, which a class object does have
  • Enums don't declare constructors, as no object initialization is needed
  • Enums can't extend other enums; that is, there is no inheritance
  • Object and static properties are not supported
  • Enums can't be instantiated with the new operator
  • The print_r output is different from that of class objects
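The first difference can be sketched as follows (a minimal example, not from the original article): an enum case serializes with an E: marker, while a class object serializes with the O: marker.

```php
<?php
enum SortType {
    case Asc;
}

class SortClass {
}

// Enum cases serialize with an E: marker; class objects use O:
echo serialize(SortType::Asc), "\n";
echo serialize(new SortClass()), "\n";
```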

To demonstrate one of these differences, consider the following script, in which an enum declares a class property (the property name is illustrative; any property declaration triggers the error):

enum SortType {
   case Asc;
   case Desc;
   case Shuffle;

   public string $sortOrder;
}

The script generates an error message:

Fatal error: Enums may not include properties

To demonstrate another of these differences, consider the following script, in which an enum is instantiated with the new operator:

$sortType = new SortType();

The script generates an error message:

Fatal error: Uncaught Error: Cannot instantiate enum SortType

To demonstrate another difference, the following script lists the print_r output for a pure enum and a backed enum:

print_r(SortType::Asc);
print_r(BackedSortType::Desc);

The output is:

SortType Enum ( [name] => Asc ) 
BackedSortType Enum:int ( [name] => Desc [value] => 2 )

Enums may not declare a __toString method. To demonstrate, use the following script, in which an enum implements the Stringable interface and provides an implementation for the __toString method:

enum SortType implements Stringable {
   case Asc;
   case Desc;
   case Shuffle;

   public function __toString(): string {
        return $this->name;
   }
}
The script generates an error message:

Fatal error: Enum may not include __toString

In this article we discussed most of the class-related features in PHP 8, including enums, the new readonly modifier for class properties, and constructor parameter promotion.

In the next article in the series, we will explore new features related to functions and methods.

This article is part of the article series “PHP 8.x”. You can subscribe to receive notifications about new articles in this series via RSS.

PHP continues to be one of the most widely used scripting languages on the web, used by 77.3% of all websites whose server-side programming language is known, according to W3Techs. PHP 8 brings many new features and other improvements, which we shall explore in this article series.

Extinguishing IT Team Burnout through Mindfulness and Unstructured Time

Key Takeaways

  • There is an IT talent crisis and a clear need to prevent burnout to retain skilled employees
  • Burnout has a huge impact and takes a serious toll on a business and its people
  • There is a direct correlation between mindfulness and productivity
  • IT leaders can avoid draining creativity and morale by building a healthy business culture that emphasizes mindfulness
  • There are some exercises which help IT spur employee creativity and generate ideas that drive benefits like greater efficiency

With fears of a looming recession, many IT leaders are yet again facing a new reality, one with fewer resources and budget cuts. This will inevitably stretch IT organizations that already spent much of the past two years supporting rapidly shifting priorities and platforms. While digital transformation was significantly accelerated, it also led to the rapid burnout of many tech pros. Looking ahead, a reduction in staff and budget will only exacerbate the problem.

For those questioning the severity of these issues, the story is in the data:

  • According to a recent Robert Half survey of 2,400 professionals in the U.S., 4 in 10 U.S. workers report an increase in burnout. Nearly topping the list of those feeling the strain are technology workers.
  • According to Gartner’s 2021-2023 Emerging Technology Roadmap for Large Enterprises, 64% of IT executives cite talent shortages as the most significant barrier to adopting emerging technology, compared to 4% in 2020.
  • The Bureau of Labor Statistics reports that software developer jobs will grow more than 22% by 2029 – much faster than the average for all occupations – creating a 1.2 million+ shortage by 2026.

With more technology workers experiencing burnout, and a market-driven reduction in resources, IT leaders are under increasing pressure to alleviate the burden on their teams. Why? Burnout dampens worker creativity, negatively affecting productivity and problem-solving, both of which have a direct impact on the bottom line.

The answer to helping is simple: Build a healthy business culture that emphasizes practicing mindfulness complemented with unstructured creative time. And it isn’t just a Silicon Valley buzzword or some kind of New Age fad. Mindfulness has been shown in randomized, controlled trials to benefit mental health. According to the Cleveland Clinic, a study of 20 different trials showed that mindfulness practice “showed demonstrated improvements in overall mental health, as well as the benefit for reducing the risk of relapse from depression. Similarly, substantial evidence exists that mindfulness has a positive impact on anxiety disorders such as post-traumatic stress disorder.” 

Further, according to the American Psychological Association, multiple studies show that mindfulness can reduce stress and decrease levels of depression and anxiety. And these aren’t the only benefits. Studies show it can improve memory, sharpen focus and produce less emotional reactivity.

So, how can a technology organization apply mindfulness to reduce burnout and improve people's state of well-being on the job? It starts with leadership encouraging an honest self-assessment.

Ask yourself, do you find yourself resisting new tasks being assigned to you? Are you tensing up during stand-ups? Are you avoiding conversations with your leader? Are you about to cancel a meeting to alleviate anxiety? Answering yes to any one of these questions may signal that you’re heading toward burnout. 

State of mind

While a focus on mindfulness may seem more at home in a monastery or yoga class, it is an exercise that anyone can practice virtually anywhere, and one that can greatly influence a company's culture and success.

Introducing mindfulness into an organization doesn’t require a group activity. Certainly, people will benefit from some initial and even ongoing training, but this can be done in a number of ways, ranging from collective training sessions to self-directed videos and reading materials. Practicing mindfulness in the presence of others through meditation, for instance, can be a powerful, beneficial experience. Still, mindfulness is, at its heart, an individual activity, and the opportunities for it to come into play in the workplace are many. We can practice mindfulness during stand-up meetings, daily check-ins with our team, even as we pick up a new development task.

Paying attention to how we react to external factors is really all it takes to start down the path of mindfulness.

Growing awareness

Mindfulness is fundamentally about awareness. For it to grow, begin by observing your state of mind, especially when you find yourself in a stressful situation. Instead of fighting emotions, observe your mental state as negative ones arise. Think about how you'd conduct a deep root cause analysis on an incident and apply that same rigor to yourself. The key to mindfulness is paying attention to your reaction to events without judgment. This can unlock a new way of thinking because it accepts your reaction while still enabling you to do what is required for the job. This contrasts with being stuck behind frustration or avoiding new work as it rolls in.

For example, if you’ve just received a new assignment and you can tell you are feeling stressed, stop what you’re doing, close your eyes and take a few deep breaths. Focus on the breaths at first, but then pay attention to the rest of your body. Are you tense? If so, where? Your shoulders? Your back? Your temples? Is your stomach upset? Emotions have a physical component to them, and paying attention to their effect on the body helps us to understand them more deeply. 

Don’t fall into judgment about the emotions you’re feeling while observing them. You may want to dismiss anxiety as ridiculous, for instance, or think about how unfair it is that your workload is making you feel this way. Instead, focus on the emotion itself. Don’t resist it, but also don’t let yourself get swept up in it. Doing either of these things gives the emotional energy that will sustain it. Eventually, as you observe, it will pass away into something else.

The power of mindfulness is that it enables us to observe our physical, emotional and cognitive experiences in an objective way. As a result, we are less likely to be swept up and can maintain ourselves in a more effective way. The next time your boss gives you an assignment, you may notice the tension in your shoulders and can say, “That’s anxiety.” Knowledge is power, and a keener awareness of exactly what emotions one is experiencing makes it far easier to regulate and manage them.

Unplug or unhinge

Mindfulness is an individual pursuit, while creativity is an enterprise pursuit, and providing space for employees to be creative is another key to preventing burnout. But there are other benefits as well. 

There is a direct correlation between creativity and productivity. Teams that spend all their time working on specific processes and problems struggle to develop creative solutions that could move a company forward. That said, rather than allowing their workers to lose agility, innovation and productivity, IT leaders should require personnel to step away from day-to-day tasks for time to think about problems in new ways.

It’s important for this time to be unstructured and low-pressure, because when the brain operates under stress, it will instinctively seek the fastest, most efficient, but not always the most effective solution. The mind wants to solve the problem and move on to the next one. This is especially true in a business environment. Workers that face a significant backlog of projects experience this cycle repeatedly, which exposes their minds and bodies to prolonged strain and stress. This pressure isn’t conducive to creative, out-of-the-box thinking, and that leads to burnout.

Ironically, for this to work, managers need to create a structure for unstructured time. It requires leadership. When someone is overloaded, the last thing they want to do is take time away from the crushing workload to do exploratory work, because then it feels like they’re simply falling further behind. 

It’s no different than worrying about a system that carries a lot of technical debt. Whether you like it or not, you have to allocate time in every sprint focusing on that technical debt. So, when it comes to unstructured creative work, leadership has to give people this time in a way that doesn’t add to their stress levels. 

One way I create unstructured, creative time for my teams is to give them an exploratory project within a time-boxed window. Typically, I’ll ask them to address a specific problem, but the solution is completely up to them in whatever way they think makes sense. For example, I recently asked the team to re-imagine the data entry screen for our sales organization, with the goal of reducing data entry by five clicks. They could prepopulate fields, create shortcuts, introduce additional automation — really, it was entirely up to them. This is totally different from the way we typically assign work, where it is more prescriptive and requires less creativity.

The timing of these projects, however, is critical. If I were to give my team an exploratory project without taking their overall task burden into account, I’d just create more stress instead of giving them a creative outlet. Adjust timelines and deadlines so that team members can truly settle into unstructured, creative time.

Nurturing mindfulness and creativity benefits the business and individual workers alike. Creativity must be an institutional value, while mindfulness must be an individual pursuit. Leadership should guide the way, taking responsibility for fostering an environment that encourages learning and collaborative problem-solving. As it does, IT professionals gain more insight, deepen their expertise and solve increasingly complex challenges. This heightened mindset is invaluable, leading to more significant innovation within your organization.

Finally, forward-thinking leaders can identify areas to make IT more efficient. The more teams can focus on improving a product for end-users, the less you have to worry about constant disruptions and redesigns. Revisiting how we work allows us to simplify or discontinue outdated or dysfunctional processes. 

What could be more productive than learning to do more with fewer steps? Accomplishing more with less work takes the pressure off of everyone, and leaves more room for the kind of unstructured, creative work that not only prevents burnout, but it also moves the organization forward, often in unexpected ways. 

There is no denying that IT organizations that are understaffed and constantly trying to address urgent issues might feel creative time is a luxury, and mindfulness, a nice-to-have. However, keeping workers operating at full speed every day is dangerous. It leaves little time for teams to think outside the box, take risks or spawn creative solutions to those longstanding, everyday problems that continually drain resources and people.

Giving employees the resources and time to practice mindfulness, engage in unstructured creative work and collaborate in a more efficient, streamlined way goes a long way toward reducing burnout and fostering a productive, less stressful work environment. We owe it to ourselves and to our people to make the changes required so that the technology industry is a more humane place to be.

Java Champion Josh Long on Spring Framework 6 and Spring Boot 3

Key Takeaways

  • Microservices are an opportunity to show where Java lags behind other languages.
  • Reactive programming provides a concise DSL to express the movement of state and to write concurrent, multithreaded code with better scaling.
  • Developing in Spring Boot works well even without special tooling support.
  • Regarding current Java developments, Josh Long is most excited about Virtual Threads in Project Loom, Java optimization in Project Leyden, and Foreign-Function access in Project Panama.
  • Josh Long wishes for real lambdas in Java — structural lambdas — and would like to revive Spring Rich, a defunct framework for building desktop Swing-powered client applications.

VMware released Spring Framework 6 and Spring Boot 3. After five years of Spring Framework 5, these releases start a new generation for the Spring ecosystem. Spring Framework 6 requires Java 17 and Jakarta EE 9 and is compatible with the recently released Jakarta EE 10. It also embeds observability through Micrometer with tracing and metrics. Spring Boot 3 requires Spring Framework 6. It has built-in support for creating native executables through static Ahead-of-Time (AOT) compilation with GraalVM Native Image. Further details on these two releases may be found in this InfoQ news story.

InfoQ spoke with Josh Long, Java Champion and first Spring Developer Advocate at VMware, about these two releases. Juergen Hoeller, Spring Framework project lead at VMware, contributed to one answer.

InfoQ: As a Spring Developer Advocate, you give talks, write code, publish articles and books, and have a podcast. What does a typical day for Josh Long look like?

Josh Long: It’s hard to say! My work finds me talking to all sorts of people, both in person and online, so I never know where I’ll be or what work I’ll focus on. Usually, though, the goal is to advance the will of the ecosystem. So that means learning about their use cases and advancing solutions to their problems. If that means talking to the Spring team and/or sending a pull request, I’ll happily do that. If it means giving a presentation, recording a podcast, writing an article or a book or producing a video, then I’ll do that.

InfoQ: VMware gets feedback about Spring from many sources: conferences, user groups, issue trackers, Stack Overflow, Slack, Reddit, Twitter, and so on. But happy users typically stay silent, and the loudest complainers may not voice essential issues. So, how does VMware collect and prioritize user feedback?

Long: This is a very good question: everything lands in GitHub, eventually. We pay special attention to StackOverflow tags and do our best to respond to them, but if a bug is discovered there, it ultimately lands in GitHub. GitHub is a great way to impact the projects. We try to make it easy, like having labels for newcomers who want to contribute to start somewhere where we could mentor them. GitHub, of course, is not a great place for questions and answers — use Stackoverflow for that. Our focus on GitHub is so great that even within the teams themselves, we send pull requests to our own projects and use that workflow.

InfoQ: There are many projects under the Spring umbrella. VMware has to educate Spring users about all of them. How does VMware know what Spring users don’t know so it can teach them?

Long: In two words: we don’t. We can surmise, of course. We spend a lot of effort advancing the new, the novel, the latest and greatest. But we are also constantly renewing the fundamental introductory content. You wouldn’t believe how many times I’ve redone the “first steps in…” for a particular project 🙂 We’re also acutely aware that while people landing on our portals and properties on the internet might be invested long-time users, people finding Spring through other means may know less. So we are constantly putting out the “your first steps in…” introductory content. And anyway, sometimes “the first steps in…” changes enough that the fundamentals become new and novel 🙂

InfoQ: Java legacy applications often use older versions of Java and frameworks. Microservices allow developers to put new technology stacks into production at a lower risk. Do you see this more as an opportunity for Java to showcase new features and releases? Or is it more of a threat because developers can test-drive Java competitors like .NET, Go, JavaScript or Python?

Long: Threat? Quite the contrary: if Java reflects poorly when viewed through the prism of other languages, then it’s better for that to be apparent and to act as a forcing function to propel Java forward. And, let’s be honest: Java can’t be the best at everything. Microservices mean we can choose to use Spring and Java for all the use cases that make sense — without feeling trapped in case Java and Spring don’t offer the most compelling solution. Don’t ask me what that use case is because I have no idea…

InfoQ: Spring 5 added explicit Spring support for Kotlin. In your estimate, what percentage of Spring development happens in Kotlin these days?

Long: I don’t know. But it’s the second most widely used language on the Spring Initializr.

InfoQ: Scala never got such explicit support in Spring. Why do you think that is?

Long: It did! We had a project called Spring Scala way back in 2012.  We really wanted it to work. Before we announced Spring Scala, we even had a Spring Integration DSL in Scala. We tried. It just seems like there wasn’t a community that wanted it to work. Which is a pity. These days, with reactive and functional programming so front-and-center, I feel like the Java and Scala communities have more in common than ever.

InfoQ: Spring 5 also added reactive applications. Now you’re a proponent of reactive applications and even wrote a book about it. What makes reactive applications so attractive to you?

Long: I love reactive programming. It gives me three significant benefits:

  • A concise DSL in which to express the movement of state in a system — in a way that robustly addresses the volatile nature of systems through things like backpressure, timeouts, retries, etc. This concise DSL simplifies building systems, as you end up with one abstraction for all your use cases.
  • A concise DSL in which to write concurrent, multithreaded code — free of so much of the fraught threading and state-management logic that bedevils concurrent code.
  • An elegant way to write code in such a way that the runtime can better use threads to scale (i.e., handle more requests per second).

InfoQ: For which problems or applications is reactive development a perfect fit?

Long: If reactive abstractions are suitable to your domain and you want to learn something new, reactive programming is a good fit for all workloads. Why wouldn’t you want more scalable, safer (more robust), and more consistent code?

InfoQ:  Where is reactive development not a good fit?

Long: Reactive development requires a bit of a paradigm change when writing code. It’s not a drop-in replacement or a switch you can just turn on to get some scalability like Project Loom will be. If you’re not interested in learning this new paradigm, and you’re OK to do without the benefits only reactive programming can offer, then it makes no sense to embrace it.

InfoQ: Common complaints about reactive development are an increased cognitive load and more difficult debugging. How valid are these complaints in Spring Framework 6 and Spring Boot 3?

Long: I don’t know that we’re doing all that much to address these concerns directly in Spring Boot 3. The usual mechanisms still work, though! Users can put breakpoints in parts of a reactive pipeline. They can use the Reactor Tools project to capture a sort of composite stack trace from all threads in a pipeline. They can use the .log() and .tap() operators to get information about data movement through the pipeline, etc. Spring Boot 3 offers one notable improvement: Spring now supports capturing both metrics and trace information through the Micrometer Metrics and Micrometer Tracing projects. Reactor even has new capabilities to support the new Micrometer Observation abstraction in reactive pipelines.

InfoQ: How important is tool support (such as IDEs and build tools) for the success of a framework? At least experienced users often bypass wizards and utilities and edit configuration files and code directly.

Long: This is a fascinating question. I have worked really hard to make the case that tooling is not very important to the experience of the Spring Boot developer. Indeed, since Spring Boot’s debut, we’ve supported writing new applications with any barebones Java IDE. You don’t need the IntelliJ IDEA Ultimate Edition, specialized support for Spring XML namespaces, or even the Java EE and WTP support in Eclipse to work with Spring Boot. If your tool supports public static void main, Apache Maven or Gradle, and the version of Java required, then you’re all set! 

And there are some places in Spring Boot that might benefit from tooling, like ye olde application.properties and application.yaml. But even here, you don’t need tooling: Spring Boot provides the Spring Boot Actuator module, which gives you an enumeration of all the properties you might use in those files.

That said: it doesn’t hurt when everything’s literally at your fingertips. Good tooling can feel like it’s whole keystrokes ahead of you. Who doesn’t love that? To that end, we’ve done a lot of work to make the experience for Eclipse and VS Code (and, by extension, most tools that support the Eclipse Java Language Server) developers as pleasant as possible. 

I think good tooling is even more important as it becomes necessary to migrate existing code. A good case in point is the new Jakarta EE APIs. Jakarta EE supersedes what was Java EE: all javax.* types have been migrated to jakarta.*. The folks at the Eclipse Foundation have taken great pains to make on-ramping to these new types as easy as possible, but it’s still work that needs to be done. Work that, I imagine, your IDE of choice will make much easier.

InfoQ: For the first time since 2010, a Spring Framework update followed not one, but two years after the previous major release – version 5.3 in 2020. So it seems Spring Framework 6 had two years of development instead of one. What took so long? 🙂  

Long: Hah. I hadn’t even noticed that! If I’m honest, it feels like Spring Framework 6 has been in development for a lot longer than two years. This release has been one of incredible turmoil! Moving to Java 17 has been easy, but the migration to Jakarta EE has been challenging for us as framework developers. First, we had to sanitize all of our dependencies across all the supported Spring Boot libraries. Then we worked with, waited for, and integrated all the libraries across the ecosystem, one by one, until everything was green again. It was painstaking and slow work, and I’m glad it’s behind us. But if we’ve done our jobs right, it should be trivial for you as a developer consuming Spring Boot.

The work for observability has also been widespread. The gist of it is that Micrometer now supports tracing, and there’s a unified abstraction for both tracing and metrics, the Observation. Now for some backstory. In Spring Boot 2.x, we introduced Micrometer to capture and propagate metrics to various time-series databases like Netflix Atlas, Prometheus, and more. Spring Framework depends on Micrometer. Spring Boot depends on Spring Framework. Spring Cloud depends on Spring Boot. And Spring Cloud Sleuth, which supports distributed tracing, depends on Spring Cloud. So we supported metrics at the very bottom of the abstraction stack and distributed tracing at the very top.

This arrangement worked, for the most part. But it meant that we had two different abstractions to think about metrics and tracing. It also meant that Spring Framework and Spring Boot couldn’t support instrumentation for distributed tracing without introducing a circular dependency. All of that changes in Spring Boot 3: Spring Framework depends on Micrometer, and Micrometer supports both tracing and metrics through an easy, unified abstraction. 

And finally, the work for Ahead-of-Time (AOT) compilation with GraalVM Native Image landed officially in Spring Framework 6 (released on November 15, 2022). It has been in the works in some form or another since at least 2019. It first took the form of an experimental research project called Spring Native, where we proved the various pieces in terms of Spring Boot 2.x and Spring Framework 5.x. That work has been subsumed by Spring Framework 6 and Spring Boot 3. 

InfoQ:  As announced last year, the free support duration for Spring Framework 6.0 and 6.1 will be shorter. Both are down 20% to 21.5 months, compared to 27 months for Spring 5.2. In contrast, the free support duration for Spring Boot 3.0 remains one year. Why is that?

Long: We standardized the way support is calculated in late 2021. We have always supported open-source releases for 12 months for free. Each project can extend that support based on release cycles and their community needs, but 12 months of open-source support and 12 months of additional commercial support is what all projects have as the minimum. It’s normal for us to further extend support for the last minor release in a major generation (as we are doing with Spring Framework 5.3.x).

It’s important to note that the standardization of support timelines happened at the end of 2021. We had zero major or minor Spring Framework releases since that happened. Spring Framework 6 will be the first under the new guidelines.

Juergen Hoeller: It’s worth noting that the commercial support timeframe for Spring Framework 6.0 and 6.1 is shorter as well. We are not shortening the support of open-source releases in favor of commercial releases. Rather, it’s all a bit tighter — the expectation is that people upgrade to the latest 6.x feature releases more quickly. Just like they also should be upgrading their JDK more quickly these days. In that sense, Spring Framework 5.x was still very much attached to the JDK 8 usage style of “you may stay on your JDK level and Java EE level.” Spring Framework 6.x is meant to track JDK 17+ and Jakarta EE 9+ (both release more often than before) as closely as possible, adapting the release philosophy accordingly. 

InfoQ: Spring Boot 3 supports the GraalVM Native Image AOT compiler out of the box. This produces native Java applications that start faster, use less memory, have smaller container images, and are more secure. In which areas of cloud computing does this put Java on more equal footing against competitors such as Go?

Long: I don’t know that I’d characterize Java as less or more on equal footing with Go. Regardless of Go, Java hasn’t been the most memory-efficient language. This has foreclosed on some opportunities like IoT and serverless. AOT compilation with GraalVM Native Image puts it in the running while retaining Java’s vaunted scalability and productivity.

InfoQ: In which areas of cloud computing will native Java not move the needle?

Long: I don’t know. It feels like GraalVM Native Image will be a suitable replacement for all the places where the JRE might have otherwise been used. Indeed, GraalVM opens new doors, too. Developers can write custom Kubernetes controllers using Spring Boot now. You can write operating-system-specific client binaries like CLIs (hello, Spring Shell!).

InfoQ: Downsides of native Java are a slower, more complex build pipeline, less tool support, and reduced observability. The build pipeline disadvantages seem unavoidable — AOT compilation takes longer, and different operating systems need different executables. But how do you think tool support and observability in native Java will compare against dynamic Java in the medium term?

Long: IntelliJ already has fantastic support for debugging GraalVM native images. I don’t think most people will mourn the loss of Java’s vaunted portability. After all, most applications run in a Linux container running on a Linux operating system on a Linux host. That said, there is a fantastic GitHub Action that you can use to do cross-compilation, where the build runs on multiple operating systems and produces executables specific to those operating systems. You can use tools like Buildpacks (which Spring Boot integrates with out of the box, e.g.: mvn -Pnative spring-boot:build-image) to build and run container images on your macOS or Windows hosts. GraalVM’s observability support has been hampered a bit because Java agents don’t run well (yet) in native executables. But the aforementioned Micrometer support can sidestep a lot of those limitations and yield a more exhaustive result.

InfoQ: Talking about observability: That’s another headline feature of Spring 6. It encompasses logging, metrics, and traces and is based on Micrometer. Java has many observability options already. Why bake another one into Spring? And why now?

Long: Java doesn’t really have a lot of things that do what Micrometer does. And we’re not baking another one — we’re enhancing an existing one that predates many distinct and singly focused alternatives. Micrometer has become a de-facto standard. Many other libraries already integrate it to surface metrics:

  • RabbitMQ Java client
  • Vert.x
  • Hibernate
  • HikariCP
  • Apache Camel
  • Reactor
  • RSocket
  • R2DBC
  • DS-Proxy
  • OpenFeign
  • Dubbo
  • Skywalking
  • Resilience4J (in-progress)
  • Neo4J

InfoQ: How can I view and analyze the observability data from Spring 6 and Spring Boot 3 besides reading the data files directly?

Long: Micrometer provides a bevy of integrations with metrics tools like Graphite, Prometheus, Netflix Atlas, InfluxDB, Datadog, etc. It works with distributed tracing tools like OpenZipkin. It also integrates with OpenTelemetry (“OTel”), so you can speak to any OTel service.

InfoQ: Spring Boot 3 won’t fully support Native Java and observability in all its projects and libraries at launch. How will I know if my Spring Boot 3 application will work in native Java and provide complete observability data?

Long: This is only the beginning of a longer, larger journey. The surface area of the things that work well out-of-the-box with GraalVM Native Image grows almost daily. There’s no definitive list, but you should know that all the major Spring projects have been working on support. It’s our priority. Check out our Spring AOT Smoke Tests to see which core projects have been validated.

InfoQ: Which upcoming feature of Java excites you the most?

Long: I am super excited about three upcoming bodies of work: Project Loom, Project Leyden, and Project Panama. Project Loom brings lightweight green threads to the JVM and promises to be a boon to scalability. Project Leyden seems like it’ll give the application developer more knobs and levers to constrain and thus optimize their JVM applications. One of the more dramatic constraints looks to be GraalVM Native Images. And Project Panama looks to finally make Foreign-Function access as pain-free as it is in languages like Python, Ruby, PHP, .NET, etc. These three efforts will bring Java to new frontiers.

InfoQ: If you could make one change to Java, what would that be?

Long: Structural lambdas! I want real lambdas in Java. Right now, lambdas are little more than syntactic sugar around single-abstract-method interfaces. All lambdas must conform to a well-known single abstract method (SAM) interface, like java.util.function.Function. This was fine before Java added the var keyword, which I love. But it’s aesthetically displeasing now because of the need to tell the compiler to which interface a given lambda literal conforms. 

Here’s some code in Kotlin:

val name = "Karen" // a regular variable of type String
val myLambda: (String) -> Int = { name -> name.length } // a lambda taking a string and returning an int

Here’s the equivalent code in Java:

var name = "Karen";
var myLambda = new Function<String, Integer>() {
  public Integer apply(String s) {
    return s.length();
  }
};
There are ways around this: 

var name = "Karen";
Function<String, Integer> myLambda = s -> s.length(); 

This is what I mean by it being aesthetically displeasing: either I abandon the consistency of having both lines start with var, or I abandon the conciseness of the lambda notation. 

Is this likely to ever get fixed? Probably not. Is it a severe issue? Of course not. On the whole, Java’s a fantastic language. And most languages should be lucky to have gotten to Java’s ripe old age with as few idiosyncratic syntax oddities as it has!

InfoQ: And what would your one change to Spring or Spring Boot be?

Long: This is a tough one! I wish we could bring back and renew Spring Rich, a now long-since defunct framework for building desktop Swing-powered client applications. Griffon is the only thing that addresses this space. It’s a shame, because Spring could be great here, especially now that it has deeply integrated GraalVM Native Image support. Admittedly, this is probably a niche use case, too 🙂 

InfoQ: Josh, thank you for this interview.


DynamoDB Data Transformation Safety: from Manual Toil to Automated and Open Source

Key Takeaways

  • Data is the backbone of many SaaS-based services today.
  • With the dynamic nature of data and cloud services, data transformation is a common need due to changing engineering requirements.
  • Data transformation remains a continuous challenge in engineering, one still built upon manual toil.
  • There is currently a lack of tools to perform data transformations programmatically, in an automated and safe way.
  • The open source utility Dynamo Data Transform was built to simplify and build safety and guardrails into data transformation for DynamoDB based systems – built upon a robust manual framework that was then automated and open sourced.


When designing a product to be a self-serve developer tool, there are often constraints – but likely one of the most common ones is scale. Ensuring our product, Jit – a security-as-code SaaS platform – was built for scale was not something we could embed as an afterthought; it needed to be designed and handled from the very first line of code.

We wanted to focus on developing our application and its user experience, without scale issues becoming a constant struggle for our engineers. After researching the infrastructure that would enable this for our team, we decided to use AWS with a serverless-based architecture.

AWS Lambda is becoming an ever-popular choice for fast-growing SaaS systems, as it provides many scale and performance benefits out of the box through its suite of tools – most notably the database that supports these systems, AWS’s DynamoDB.

One of its key benefits is that it is already part of the AWS ecosystem, and therefore this abstracts many of the operational tasks of management and maintenance, such as maintaining connections with the database, and it requires minimal setup to get started in AWS environments.

As a fast-growing SaaS operation, we need to evolve quickly based on user and customer feedback and embed this within our product. Many of these changes in application design have a direct impact on data structures and schemas.

With rapid and oftentimes significant changes in the application design and architecture, we found ourselves needing to make data transformations in DynamoDB very often, and of course, with existing users, it was a priority that this be achieved with zero downtime. (In the context of this article Data Transformation will refer to modifying data from state A to state B).

Challenges with Data Transformation

In the spirit of Brandon Moreno from the UFC:

Maybe not today, maybe not tomorrow, and maybe not next month, but only one thing is true, you will need to make data transformations one day, I promise.

Yet, while data transformation is a known constant in engineering and data engineering, it remains a pain point and challenge to do seamlessly. Currently, in DynamoDB, there is no easy way to do it programmatically in a managed way, surprisingly enough.

While there are many forms of data transformation – from replacing an existing item’s primary key to adding or removing attributes, updating existing indexes, and so on (these are just a few examples) – there remains no simple way to perform any of these in a managed and reproducible manner without resorting to breakable, one-off scripting.

User Table Data Transform Example

Below, we are going to dive into a real-world example of a data transformation process with production data.

Let’s take the example of splitting a “full name” field into its components “first name” and “last name”. As you can see in the example below, the data aggregation currently writes names in the table with a “full name” attribute. But let’s say we want to transform from a full name, and split this field into first and last name fields.









Looks easy, right? Not so: to achieve even this simple change, these are the steps that will need to be performed on the business-logic side in order to successfully transform this data.

  • Scanning the user records
  • Extracting the FullName attribute from each record
  • Splitting the FullName attribute into new FirstName and LastName attributes
  • Saving the new records 
  • Cleaning up the FullName attribute

But let’s discuss some of the issues you would need to take into account before you even get started. For example, how do you run and manage these transformations in different application environments, particularly when direct access to each environment is not really considered a security best practice? In addition, you need to think about service dependencies. What should you do when you have another service dependent on this specific data format? Your service needs to be backward compatible and still provide the same interface to external services relying on it.
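One way to keep that interface stable during a migration can be sketched as follows (a hypothetical example – the function name is ours, not from this article): external consumers keep receiving the old shape, with FullName derived from the new fields when a record has already been transformed.

```javascript
// Hypothetical sketch: serve the old interface to external consumers
// regardless of whether a record is in the old or the new format.
function toExternalUserFormat(user) {
  const FullName =
    user.FullName ?? [user.FirstName, user.LastName].filter(Boolean).join(' ');
  return { ...user, FullName };
}
```

With a shim like this in the read path, dependent services see no change while the table is transformed underneath them.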

When you have production clients, possibly one of the most critical questions you need to ask yourself before you modify one line of code is how do you ensure that zero downtime will be maintained?

Some of the things you’d need to plan for to avoid any downtime are testing and verification. How do you even test your data transformation script? What are some good practices for running a reliable dry run of a data transformation on production data?
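One possible shape for a dry run (a sketch under our own naming, not this article’s tooling): run the exact transform function you intend to ship, but only print the would-be writes instead of persisting them.

```javascript
// Hypothetical dry-run harness: transformFn is the same function used in
// the real migration; writeFn (e.g. a dynamodbClient.put wrapper) is only
// called when isDryRun is false.
async function runTransformation(items, transformFn, { isDryRun = true, writeFn } = {}) {
  const updatedItems = items.map(transformFn);
  if (isDryRun) {
    // Inspect (or diff) the output before touching real data
    updatedItems.forEach((item) => console.log('[dry run] would write:', JSON.stringify(item)));
  } else {
    await Promise.all(updatedItems.map((item) => writeFn(item)));
  }
  return updatedItems;
}
```

Because the same transform function runs in both modes, whatever you verified in the dry run is exactly what executes for real.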

There are so many things to consider before transforming data.

Now think that this is usually, for the most part, done manually.  What an error-prone, tedious process! It looks like we need a fine-grained process that will prevent mistakes and help us to manage all of these steps.

To avoid this, we understood we’d need to define a process that would help us tackle the challenges above.

The Rewrite Process

Figure 1: Rewrite Process Flow Chart

First, we started by adjusting the backend code to write the new data format to the database while still keeping the old format: writing the FullName alongside the new FirstName and LastName gives us some reassurance of backward compatibility. This enables us to revert to the previous format if something goes terribly wrong.

​​async function createUser(item) {
   // FullName = 'Guy Br'
   // 'Guy Br'.split(' ') === ['Guy', 'Br']
   // Just for the example assume that the FullName has one space between first and last name
   const [FirstName, LastName] = item.FullName.split(' ');
   const newItemFormat = { ...item, FirstName, LastName };
   return dynamodbClient.put({
       TableName: 'Users',
       Item: newItemFormat,
   }).promise();
}
Link to GitHub

Next, we wrote a data transformation script that scans the old records and appends the FirstName and LastName attributes to each of them, see the example below:

async function appendFirstAndLastNameTransformation() {
  let lastEvalKey;
  let scannedAllItems = false;

  while (!scannedAllItems) {
    // ExclusiveStartKey continues the scan from the previous page
    const { Items, LastEvaluatedKey } = await dynamodbClient
      .scan({ TableName: 'Users', ExclusiveStartKey: lastEvalKey })
      .promise();
    lastEvalKey = LastEvaluatedKey;

    const updatedItems = Items.map((item) => {
      const [FirstName, LastName] = splitFullNameIntoFirstAndLast(item.FullName);
      const newItemFormat = { ...item, FirstName, LastName };
      return newItemFormat;
    });

    await Promise.all(updatedItems.map((item) => {
      return dynamodbClient.put({
        TableName: 'Users',
        Item: item,
      }).promise();
    }));

    scannedAllItems = !lastEvalKey;
  }
}

Link to GitHub

After writing the actual script (which is the easy part), we now needed to verify that it actually does what it’s supposed to. To do so, the next step was to run this script in a test environment and make sure it works as expected. Only after the script’s correctness is confirmed can it be run on the application environments.

The last phase is the cleanup: taking the plunge and ultimately deleting the FullName attribute entirely from our records. This is done in order to purge the old data format, which is not used anymore, and to reduce clutter and any future misuse of it.

async function cleanup() {
  let lastEvalKey;
  let scannedAllItems = false;

  while (!scannedAllItems) {
    // ExclusiveStartKey continues the scan from the previous page
    const { Items, LastEvaluatedKey } = await dynamodbClient
      .scan({ TableName: 'Users', ExclusiveStartKey: lastEvalKey })
      .promise();
    lastEvalKey = LastEvaluatedKey;

    const updatedItems = Items.map((item) => {
      delete item.FullName;
      return item;
    });

    await Promise.all(updatedItems.map((item) => {
      return dynamodbClient.put({
        TableName: 'Users',
        Item: item,
      }).promise();
    }));

    scannedAllItems = !lastEvalKey;
  }
}

Link to GitHub

Let’s quickly recap what we have done in the process:

  • Adjusted the backend code to write in the new data format
  • Created a data transformation script that updates each record
  • Validated that script against a testing environment
  • Ran the script on the application environments
  • Cleaned up the old data

This well-defined process helped us build much-needed safety and guardrails into our data transformation process. As mentioned before, with this process we were able to avoid downtime by keeping the old format of the records until it was no longer needed. This provided us with a good basis and framework for more complex data transformations.

Transforming Existing Global Secondary Index (GSI) using an External Resource

Now that we have a process – let’s be honest, real-world data transformations are hardly so simple. Let’s assume, as a more likely scenario, that the data is actually ingested from an external resource, such as the GitHub API, and that our more advanced data transformation scenario requires us to ingest data from multiple sources.

Let’s take a look at the example below for how this could work.

In the following table, the GSI partition key is by GithubUserId.

For the sake of this data transformation example, we want to add a “GithubUsername” column to our existing table.













This data transformation looks just as straightforward as the full-name example, but there is a little twist.

How can we get the GitHub username if we don’t have this information? We have to use an external resource; in this case, the GitHub API.

GitHub has a simple API for extracting this data (you can read the documentation here). We will pass the GithubUserId and get back information about the user, which contains the username field that we want.

The naive flow is similar to the full name example above:

  • Adjust our code to write in the new data format.
  • Assume that we have the Github username when creating a user.
  • Scan the user records (get `GithubUsername` by `GithubUserId` for each record using Github API), and update the record. 
  • Run that script on the testing environment
  • Run it on the application environments

However, in contrast to our previous flow, there is an issue with this naive one: it is not safe enough. What happens if you run into issues while calling the external resource during the data transformation? Perhaps the external resource crashes, blocks your IP, or is simply unavailable for some other reason. In that case, you might end up with production errors, a partial transformation, or other issues with your production data.

What can we do on our end to make this process safer?

While you can always resume the script if an error occurs, or try to handle errors in the script itself, it is important to have the ability to perform a dry run with the prepared data from the external resource before running the script on production. A good way to provide greater safety is to prepare the data in advance.

Below is the design of the safer flow:

  • Adjust our code to write in the new data format (create a user with GithubUsername field)
  • Create the preparation data for the transformation

Only after we do this do we scan the user records, get the GithubUsername for each of them using the GitHub API, append it to a JSON object `{ [GithubUserId]: GithubUsername }`, and then write that JSON to a file.

This is what such a flow would look like:

const fs = require('fs/promises');

let preparationData = {};

async function prepareGithubUsernamesData() {
  let lastEvalKey;
  let scannedAllItems = false;

  while (!scannedAllItems) {
    // ExclusiveStartKey continues the scan from the previous page
    const { Items, LastEvaluatedKey } = await dynamodbClient
      .scan({ TableName: 'Users', ExclusiveStartKey: lastEvalKey })
      .promise();
    lastEvalKey = LastEvaluatedKey;

    const currentIdNameMappings = await Promise.all(Items.map(async (item) => {
      const githubUserId = item.GithubUserId;
      const response = await fetch(`https://api.github.com/user/${githubUserId}`, { method: 'GET' });
      const githubUserResponseBody = await response.json();
      const GithubUsername = githubUserResponseBody.login;

      return { [item.GithubUserId]: GithubUsername };
    }));

    currentIdNameMappings.forEach((mapping) => {
      // append the current mapping to the preparationData object
      preparationData = { ...preparationData, ...mapping };
    });

    scannedAllItems = !lastEvalKey;
  }

  await fs.writeFile('preparation-data.json', JSON.stringify(preparationData));
}

Link to GitHub

Next, we scan the user records (getting the GithubUsername for each record by its GithubUserId from the preparation data), and move ahead to updating the records.

async function appendGithubUsername() {
  let lastEvalKey;
  let scannedAllItems = false;

  while (!scannedAllItems) {
    // ExclusiveStartKey continues the scan from the previous page
    const { Items, LastEvaluatedKey } = await dynamodbClient
      .scan({ TableName: 'Users', ExclusiveStartKey: lastEvalKey })
      .promise();
    lastEvalKey = LastEvaluatedKey;

    const updatedItems = Items.map((item) => {
      const GithubUsername = preparationData[item.GithubUserId];
      const updatedItem = GithubUsername ? { ...item, GithubUsername } : item;
      return updatedItem;
    });

    await Promise.all(updatedItems.map((item) => {
      return dynamodbClient.put({
        TableName: 'Users',
        Item: item,
      }).promise();
    }));

    scannedAllItems = !lastEvalKey;
  }
}
Link to GitHub

And finally, like the previous process, we wrap up by running the script on the testing environment, and then the application environments.

Dynamo Data Transform

Once we built a robust process that we could trust for data transformation, we understood that to do away with human toil and ultimately error, the best bet would be to automate it.

We realized that even if this works for us today at our smaller scale, manual processes will not grow with us. This isn’t a practical long-term solution and would eventually break as our organization scales. That is why we decided to build a tool that would help us automate and simplify this process so that data transformation would no longer be a scary and painful process in the growth and evolution of our product. 

Applying automation with open source tooling

Every data transformation is just a piece of code that helps us perform a specific change in our database – but these scripts should ultimately live in your codebase.

This enables us to do a few important operations:

  • Track the changes in the database and know its history at every moment, which helps when investigating bugs and issues.
  • Avoid reinventing the wheel – reusing existing data transformation scripts already written in your organization streamlines processes.

By enabling automation for data transformation processes, you essentially make it possible for every developer to be a data transformer. You likely should not give production access to every developer in your organization, but applying changes is only the last mile. When only a handful of people have access to production, their job should be limited to validating the scripts and running them on production – not doing all of the heavy lifting of writing the scripts too. Having them do both consumes more time than needed and is not safe.

When the scripts are in your codebase and their execution is automated via CI/CD pipelines, other developers can review them, and basically anyone can perform data transformations on all environments, alleviating bottlenecks.

Now that we understand the importance of having the scripts managed in our codebase, we want to create the best experience for every data-transforming developer.

Making every developer a data transformer

Every developer prefers to focus on their business logic, with very few context disruptions and changes. This tool can assist in keeping them focused on that business logic, so that they do not have to start from scratch every time they need to perform a data transformation to support their current task.

For example – dynamo-data-transform provides the benefits of: 

  • Exported utility functions that are useful for most data transformations
  • Managed versioning of the data transformation scripts
  • Support for dry runs, to easily test the data transformation scripts
  • Rollback in the event the transformation goes wrong, making it possible to easily revert to the previous state
  • Usage via CLI, for dev friendliness and to remain within developer workflows. You can run the scripts with simple commands like `dynamodt up`, `dynamodt down` for rollback, and `dynamodt history` to show which commands were executed.
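The up/down idea behind `dynamodt up` and `dynamodt down` can be sketched with the earlier full-name example (a hypothetical sketch – these plain functions are ours and stand in for the real scripts, which run through the package):

```javascript
// Hypothetical up/down pair: `up` moves an item to the new format,
// `down` restores the previous state for rollback.
const up = (item) => {
  const [FirstName, ...rest] = item.FullName.split(' ');
  return { ...item, FirstName, LastName: rest.join(' ') };
};

const down = ({ FirstName, LastName, ...rest }) => ({
  ...rest,
  FullName: [FirstName, LastName].filter(Boolean).join(' '),
});
```

Keeping every transformation reversible in this way is what makes rollback a one-command operation rather than an emergency scripting session.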

Dynamo Data Transform:

Quick Installation for serverless:
The package can also be used as a standalone npm package, see here.

To get started with DynamoDT, first run:

npm install dynamo-data-transform --save-dev

To install the package through NPM (you can also install it via…)

Next, add the tool to your serverless.yml. Run:

npx sls plugin install -n dynamo-data-transform

You also have the option of adding it manually to your serverless.yml:


plugins:
  - dynamo-data-transform

You can also run the command:

sls dynamodt --help

To see all of the capabilities that DynamoDT supports.

Let’s get started with running an example with DynamoDT. We’ll start by selecting an example from the code samples in the repo, for the sake of this example, we’re going to use the example `v3_insert_users.js`, however, you are welcome to test it out using the examples you’ll find here.

We’ll initialize the data transformation folder with the relevant tables by running the command: 

npx sls dynamodt init --stage local

For serverless (it generates the folders using the resources section in the serverless.yml):

      Type: AWS::DynamoDB::Table
      Properties:
        TableName: UsersExample

The section above should be in serverless.yml

The data-transformations folder is generated with a template script, which can be found here.

We will start by replacing the code in the template file v1_script-name.js with:

const { utils } = require('dynamo-data-transform');

const TABLE_NAME = 'UsersExample';

/**
 * The tool supplies the following parameters:
 * @param {DynamoDBDocumentClient} ddb - DynamoDB document client
 * @param {boolean} isDryRun - true if this is a dry run
 */
const transformUp = async ({ ddb, isDryRun }) => {
  const addFirstAndLastName = (item) => {
    // Just for the example:
    // Assume the FullName has one space between first and last name
    const [firstName, ...lastName] = item.FullName.split(' ');
    return {
      ...item,
      firstName,
      lastName: lastName.join(' '),
    };
  };
  return utils.transformItems(ddb, TABLE_NAME, addFirstAndLastName, isDryRun);
};

module.exports = {
  transformUp,
  transformationNumber: 1,
};

Link to GitHub
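The heart of the script is the per-item callback passed to `utils.transformItems`. Because it is a pure function, you can sanity-check it with plain Node before even running a dry run; a minimal sketch (the sample item shape is an assumption for illustration):

```javascript
// Per-item transformation extracted as a pure function so it can be
// exercised without a DynamoDB connection or the dynamo-data-transform package.
const addFirstAndLastName = (item) => {
  // Assume FullName has exactly one space between first and last name.
  const [firstName, ...lastName] = item.FullName.split(' ');
  return { ...item, firstName, lastName: lastName.join(' ') };
};

// Quick local check with a hypothetical item:
const sample = addFirstAndLastName({ Id: '1', FullName: 'Ada Lovelace' });
console.log(sample);
```

Keeping the data-shaping logic separate from the I/O in this way makes both dry runs and unit tests cheap.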

For most of the regular data transformations, you can use the util functions from the dynamo-data-transform package. This means you don’t need to manage the versions of the data transformation scripts, the package will do this work for you. Once you’ve customized the data you’ll want to transform, you can test the script using the dry run option by running:

npx sls dynamodt up --stage local --dry

The dry run option prints the records in your console so you can immediately see the results of the script, and ensure there is no data breakage or any other issues.

Once you’re happy with the test results, you can remove the `--dry` flag and run it again; this time it will run the script on your actual data, so make sure to validate the results and outcome.

Once you have created your data transformation files, the next logical step is to add them to your CI/CD pipeline. To do so, add the command to your workflow/CI file for production environments.

The command will run immediately after the `sls deploy` command, which is useful for serverless applications.

Finally, all of this is saved, as noted above so if you want to see the history of the data transformations, you can run:

`npx sls dynamodt history --table UsersExample --stage local`

The tool also provides an interactive CLI for those who prefer that approach, and all of the commands above are supported there as well.

With Dynamo Data Transform, you get the added benefits of being able to version and order your data transformation operations and manage them in a single place. You also have the history of your data transformation operations if you would like to roll back an operation. And last but not least, you can reuse and review your previous data transformations.
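A rollback is typically expressed as a `transformDown` that inverts `transformUp`. Here is a minimal sketch of the inverse per-item logic for the example above, kept pure for easy local testing; in a real script it would be passed to `utils.transformItems` and exported alongside `transformationNumber` (the field names are assumptions carried over from the example):

```javascript
// Inverse of the example transformation: drop the derived fields so
// items return to their previous shape.
const removeFirstAndLastName = (item) => {
  const { firstName, lastName, ...rest } = item;
  return rest;
};

// Simulate rolling back one already-transformed item:
const rolledBack = removeFirstAndLastName({
  Id: '1',
  FullName: 'Ada Lovelace',
  firstName: 'Ada',
  lastName: 'Lovelace',
});
console.log(rolledBack);
```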

We have open-sourced the Dynamo Data Transform tool that we built for internal use to perform data transformations on DynamoDB and serverless-based environments and manage these formerly manual processes in a safe way.

The tool can be used as a Serverless Plugin and as a standalone NPM package.

Feel free to provide feedback and contribute to the project if you find it useful.

Figure 2: Data Transformation Flow Chart 

The Challenge of Cognitive Load in Platform Engineering: a Discussion with Paula Kennedy

Key Takeaways

  • As the demands on development teams increase, we start to hit limits on the amount of information that can be processed which can have negative impacts on completing tasks effectively
  • A platform that is not designed with the user in mind will in fact increase the cognitive burden on developers utilizing it
  • When viewed through cognitive load theory, the product concept of delight can help qualify the cognitive burden the platform removes from the teams leveraging it
  • Paved paths and proven patterns help provide development teams with what good looks like, simplifying the problem space and reducing extraneous cognitive load
  • With the cognitive load burden potentially being pushed down to platform teams, companies should start discussing how to best improve the platform team experience and not just focus on developer experience

In a recent article, Paula Kennedy, Chief Operating Officer at Syntasso, shared her thoughts on the ever-increasing cognitive load being saddled onto development teams. She explains that the platform engineering approach is attempting to mitigate some of this cognitive load on development teams, but this may be coming at the cost of shifting that cognitive burden onto the platform teams instead.

Cognitive Load Theory (CLT), first coined by John Sweller in 1988, views human cognition as a combination of working memory and long-term memory. Working memory has a limited capacity and consists of multiple components that are responsible for directing attention and coordinating cognitive processes. Long-term memory, on the other hand, has effectively unlimited capacity and works with working memory to retrieve information as needed. 

CLT was originally designed as a means to improve classroom instruction, but there are applications to software development as well. Sweller identified three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is the effort associated with the task at hand. In a math class, this would be adding 2+2 or attempting to reduce a polynomial equation. In pedagogical terms, this is often viewed to be immutable. 

Extraneous cognitive load is produced by demands imposed on the individual doing the task from sources external to them and the task. It can include things like distracting information, poor instructions, or information being conveyed in a way that is not ideal. An example would be verbally providing someone the steps to configure a load balancer versus providing the steps in a written document. 

Germane cognitive load, as described in an article from Psychologist World, is: 

Produced by the construction of schemas and is considered to be desirable, as it assists in learning new skills and other information.

In cognitive science, a schema is a mental construct of preconceived ideas, a framework representing aspects of the world. While intrinsic load is viewed as immutable, it is desirable to minimize extraneous cognitive load and attempt to maximize germane cognitive load.

The focus on developer experience within platform engineering is a response to the heavy cognitive load placed on teams responsible for the full product lifecycle of development. Nigel Kersten, Field CTO at Puppet, explains that organizations, especially large enterprises, that implement fully autonomous DevOps teams may be optimized locally but at the expense of other teams:

That may do a good job at locally optimizing for that particular value stream team or that application. It doesn’t optimize for the whole organization, you are creating cognitive load for your auditors, for your IT asset management folks, for all your governance issues around cost control and security. How do you switch from one team to another? All of these things become really, really complicated.

As the demands on development teams increase, we start to hit limits on the amount of information that can be processed. Heavy cognitive load can have negative impacts on the ability to complete a task effectively. As Kennedy notes:

This is an on-going struggle for anyone trying to navigate a complex technical landscape. New tools are released every day and keeping up with new features, evaluating tools, selecting the right ones for the job, let alone understanding how these tools interact with each other and how they might fit into your tech stack is an overwhelming activity. 

These activities are extraneous to the critical tasks assigned to stream-aligned teams and therefore interfere with achieving fast flow on the essential priorities of the business. For stream-aligned development teams, shipping business value is the task that should have the bulk of the team’s time and energy. Tasks like keeping up with new CI/CD tooling or the latest security threats are not, for most companies, directly critical to the product being sold. 

Evan Bottcher, Head of Architecture at MYOB, describes a platform as: 

A digital platform is a foundation of self-service APIs, tools, services, knowledge and support which are arranged as a compelling internal product. Autonomous delivery teams can make use of the platform to deliver product features at a higher pace, with reduced coordination.

This is why the developer experience is so critical to a well-engineered platform. A platform that introduces extra extraneous load or is not a schema that promotes healthy germane cognitive load will in fact increase the cognitive burden on developers utilizing it. A platform that is not built with the end-user in mind, that does not appreciate their user journey, will not succeed in improving their delivery.

As Cristóbal García García, senior staff engineer at Amenitiz, and Chris Ford, head of technology at Thoughtworks, note:

You must never forget that you are building products designed to delight their customers – your product development teams. Anything that prevents developers from smoothly using your platform, whether a flaw in API usability or a gap in documentation, is a threat to the successful realisation of the business value of the platform.

With this lens of cognitive load theory, delight becomes a means of qualifying the cognitive burden the platform is removing from the development teams and their work to accomplish their tasks. The main focus of the platform team, as described by Kennedy, is on providing “developer delight” whilst avoiding technical bloat and not falling into the trap of building a platform that doesn’t meet developer needs and is not adopted.

She continues by noting the importance of paved paths, also known as Golden Paths:

By offering Golden Paths to developers, platform teams can encourage them to use the services and tools that are preferred by the business. This helps to streamline the number of tools offered, reduce the cognitive load of too many options, as well as reduce technical bloat of the platform.

Applying cognitive load theory, paved paths are a means of increasing germane cognitive load by providing a schema for development teams to better understand the problem space they are working in.

In a recent tweet, Matthew Skelton, co-author of Team Topologies, echoed this sentiment and the value that good patterns to follow can bring:

What if the most important part of “platform engineering” is maintaining a high quality wiki with proven, empathic patterns for Stream-aligned teams to follow? 

Within pedagogy, there are numerous studies that show the benefits of providing worked examples to improve learning. As noted by Dan Williams, 

These steps provide learners with direction and support to create mental models of how to tackle a problem/task, or what ‘good’ looks like. Discovery or problem-based learning, on the other hand, can be burdensome to working memory due to learners having insufficient prior knowledge to draw upon to support their learning.

Paved paths and proven patterns help provide development teams with what good looks like from a holistic problem space encompassing areas like compliance and governance. As most development work mimics problem-based learning, the cognitive load on development teams is already very high. Paved paths, with associated platform tooling, simplify the problem space and reduce extraneous cognitive load. 

Kennedy does worry that in tasking teams with building this platform, we are not just spreading the cognitive load around evenly but instead pushing it down onto the platform teams. She notes that

These teams have become responsible for providing the developer experience, but with many tools that need to be incorporated, as well as other concerns such as compliance and governance, they face huge cognitive load. This is typically a team that is underinvested in and yet they are responsible for providing the platform that is supporting the delivery of customer value.

Kennedy wonders if, in addition to the current focus on developer experience, we should also be talking about and improving platform engineer experience. 

InfoQ sat down with Kennedy to discuss the article in more detail.

InfoQ: By shifting the burden to the platform engineers you state that we are in fact pushing the cognitive load down as opposed to spreading it out evenly. This feels like we may be creating a new problem in attempting to solve our current predicament on application engineers. How do you propose we ensure we are not overloading our platform teams?

Paula Kennedy: There has been a lot of buzz and focus over the last few years on improving Developer Experience or “DevEx” to make it easier for developers to deliver and to reduce their cognitive load. Unfortunately, this cognitive load has not gone away and often is just shifted to others to manage, such as the platform engineers. 

In order to help ease this burden, I would love to see the emergence of people talking about the Platform Team Experience and how we can improve it. Whether Platform Teams are managing cloud providers, running an off-the-shelf PaaS, or have built their own platform on top of Kubernetes, I think there need to be more resources and tools that we can provide to these Platform Teams to make it easier for them to curate the platform that enables their organisation. With the increased discussion on the topic of platform engineering, I’m very excited to see what patterns and tools emerge to help solve this challenge. 

InfoQ: You note that the platform team is “responsible for providing the platform that is supporting the delivery of customer value” yet is typically underinvested in. Why do you think this is? What can platform teams do to correct this?

Kennedy: In my experience, Platform Engineering has often evolved organically instead of a Platform Team being formed deliberately, and we’ve seen this occur in at least three ways. 

Firstly, where organisations have embraced the culture of DevOps and enabled teams to autonomously deliver their software to production, these DevOps teams end up managing platform-level concerns without any additional resources. 

Secondly, some organisations consider any internal platform to be the responsibility of the infrastructure team, viewed as just another infrastructure cost to be minimised. 

Lastly, I’ve seen organisations bring in a vendor-supported Platform-as-a-Service expecting this to solve all internal platform challenges and require minimal maintenance as everything is ready “out of the box” – which is not the reality. 

In all of these cases, there is no understanding of the skills and resources needed to manage the internal platform, leading to underinvestment.

Thankfully we are seeing more resources and experiences being shared on the benefits that Platform Engineering and Platform Teams can bring. Team Topologies is a book that I recommend frequently as it provides a vocabulary to describe how the flow of value across an organisation and into the hands of end customers can be enabled through having clear team responsibilities and reduced friction across teams. 

In Team Topologies the authors advocate for having a Platform Team supporting multiple Stream Aligned Teams, and these teams should collaborate to understand each other’s needs and build empathy, whilst driving towards an X-as-a-Service model, where the Platform Team ensures that the Stream Aligned Team can self-serve the tools and services that they need. With more examples of the value that Platform Teams bring to the software delivery lifecycle being shared publicly, companies are starting to recognise the importance of investing in this team to ensure they have the right skills and tools to support the wider organisation.

Platform Teams can also take steps to demonstrate their value internally by considering metrics or Key Performance Indicators (KPIs) that are important to their organisation and how their work contributes to improving these. This could look like running a value stream mapping exercise, identifying where there is waste or duplication, and demonstrating how the Platform Team can offer a centralised service to improve this. 

If regulatory compliance is a critical concern for an organisation, the Platform Team could drive close collaboration with compliance teams and application teams to create friction-less paths to production with compliance steps built in, ensuring that software is delivered faster and compliantly. Internal metrics or KPIs are heavily context-specific, but by aiming to measure what is important to the business, a Platform Team can demonstrate value as well as improvements over time.

InfoQ: You mention that DevOps, in attempting to correct issues arising from the siloing of Dev and Ops, has led to developers being saddled with more cognitive load. The Platform Engineering paradigm is attempting to deal with this by pulling part of that load onto a self-service platform that handles much of the burden of getting code into production and supporting it. Are we not at risk of just re-introducing the silos that DevOps was attempting to break down?

Kennedy: The term “DevOps” means different things to different people and even though it’s a term that has been around for more than a decade, it still often causes confusion. Personally, I like to refer to the definition from Patrick Debois, back in 2010:

The Devops movement is built around a group of people who believe that the application of a combination of appropriate technology and attitude can revolutionise the world of software development and delivery.

At its core, the DevOps movement is absolutely about reducing silos, increasing communication and empathy between teams, as well as improving automation, and striving for continuous delivery. For me, having an internal Platform Team is just a natural evolution of DevOps, especially DevOps at scale. 

Members of the Platform Team are responsible for their internal platform product, and they need to both develop this platform to meet the needs of their users (the application teams) as well as operate the platform day to day. 

The software developers focused on delivering features to end customers are responsible for developing their software as well as operating it using the self-service tools supported by the platform team. In this model, everyone is doing “DevOps”, i.e., everyone is developing and operating their own software. But for this to really work well and avoid the potential for more silos, there is a significant cultural component to be considered, and this is the mindset shift for the Platform Team to treat their platform as a product.

I’ve talked about this a lot over the last few years but depending on how the Platform Team has evolved within an organisation, this can present a huge challenge. When treating the internal platform as a product, this includes understanding user needs (the application teams), delivering in small batches to seek feedback, providing a high-quality user experience to delight developers, internal marketing and advocacy of the platform, and much more. Where platform teams embrace this mindset, we’ve seen significant benefits, as evidenced by industry research such as the Puppet State of DevOps Report 2021: “Not every platform team is automatically successful, but the successful ones treat their platform as a product.”

Create Your Distributed Database on Kubernetes with Existing Monolithic Databases

Key Takeaways

  • Cloud-native is becoming the trend; for databases, Kubernetes is the environment provider, and Apache ShardingSphere now provides a solution for putting monolithic databases on Kubernetes
  • Apache ShardingSphere can transform any database into a distributed database system, while enhancing it with functions such as sharding, elastic scaling, and encryption
  • Users may deploy and manage ShardingSphere clusters on Kubernetes using its Helm charts and operator, and create their distributed database system regardless of where their databases reside
  • The advantages of running ShardingSphere on Kubernetes include leveraging existing database capacity, efficient and steady migration, cloud-native running and governance with a familiar SQL-like approach, flexible auto-scaling, a choice of clients, and open-source support
  • The example case in this article demonstrates how to deploy ShardingSphere-Operator, create a sharding table using DistSQL, and test the scaling and HA of the ShardingSphere-Proxy cluster


Most of the recent convenience upgrades that have blessed people’s lives in the 21st century can be traced back to the widespread adoption of the Internet.

Constant connectivity at our fingertips improved our lives, and created new technical infrastructure requirements to support high-performance Internet services. Developers and DevOps teams have become focused on ensuring the backend infrastructure’s availability, consistency, scalability, resilience, and fully automated management.

Examples of issues that tech teams are constantly struggling with include managing and storing large amounts of business data, creating the conditions to ensure that infrastructures deliver optimal service to the applications, designing technical architecture while thinking ahead to meet future requirements, and evolving modern applications to be able to “live” in the cloud.

The cloud is game-changing technology, and if you haven’t yet, you should get familiar with it. It has already transformed infrastructure as we know it, from development to delivery, deployment, and maintenance. Nowadays, modern applications are embracing the concept of anything-as-a-service from various cloud vendors, and developer and operations teams are considering upgrading legacy workloads to future cloud-native applications.

Microservices on Kubernetes

To address the challenges mentioned above, we are witnessing an evolution of the application layer from monolithic services to microservices. By dividing a single monolithic service into smaller units, modern applications can become independent of one another while avoiding unwanted side effects across development, deployment, and upgrading.

Moreover, to decouple services and simplify the communication between them, such as API calls, the service mesh appeared and took over. Kubernetes provides an abstract platform and mechanism for this evolution, which explains its popularity.

If I had to pinpoint the reason why Kubernetes is so popular, I’d say that it’s because, according to the Kubernetes docs:

Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system. (From “Why you need Kubernetes and what it can do” section.)

Kubernetes is an ideal platform for managing the microservice’s lifecycle, but what about the database, a stateful service?


The application layer has adopted microservices as the solution to address the issues previously introduced here. Still, when it comes to the database layer, the situation is a little different.

To answer the pain points we raised, we can look at the database layer. It uses a different method, yet somewhat similar: sharding, a.k.a. distributed architecture.

Currently, this distributed architecture is ubiquitous, whether we’re talking about NoSQL databases, such as MongoDB, Cassandra, HBase, and DynamoDB, or NewSQL databases, such as CockroachDB, Google Spanner, Aurora, and so forth. Distributed databases require splitting the monolithic one into smaller units, or shards, for higher performance, improved capability, elastic scalability, etc.

One thing all of these database vendors have in common is that they all must consider historical migration to streamline this evolution process. They all provide data migration from existing Oracle, MySQL, PostgreSQL, and SQL Server databases, just to name a few, to their new database offerings. That’s why CockroachDB is compatible with the PostgreSQL protocol, Vitess provides a sharding feature for MySQL, and AWS has Aurora-MySQL and Aurora-PostgreSQL.

Database on Cloud and Kubernetes

The advent of the cloud represents the next challenge for databases. Cloud platforms that are “go-on-demand,” “everything-as-a-service,” or “out-of-box” are currently changing the tech world.

Consider an application developer. To stay on pace with the current trends, the developer adheres to the cloud-native concept and prefers to deliver the applications on the cloud or Kubernetes. Does this mean it is time for databases to be on the cloud or Kubernetes? The majority of readers would probably answer with a resounding yes – which explains why the market share of the Database-as-a-service (DBaaS) is steadily increasing.

Nevertheless, if you’re from the buy side for these services, you may wonder which vendor can promise you indefinite support. The truth is that nobody can give a definitive answer, so multi-cloud comes to mind, and databases on Kubernetes seem to have the potential to deliver on this front.

This is because Kubernetes is essentially an abstraction layer for container orchestration and is highly configurable and extensible, allowing users to do custom coding for their specific scenarios. Volumes on Kubernetes, for example, are implemented and provided by many cloud vendors. If services are deployed on Kubernetes, applications will be able to interact with Kubernetes rather than different types of specific cloud services or infrastructure. This philosophy has already proven to work well in the case of stateless applications or microservices. As a result of these successful cases, people are thinking about how to put databases on Kubernetes to become cloud neutral.

A drawback to this solution is that it is more difficult to manage than the application layer, as Kubernetes is designed for stateless applications rather than databases and stateful applications. Many attempts to leverage Kubernetes’ fundamental mechanisms, such as StatefulSet and Persistent Volume, overlay their custom coding to address the database challenge on Kubernetes. This approach can be seen in operators of MongoDB, CockroachDB, PostgreSQL, and other databases.

Database Compute-Storage Architecture

This approach has become common, but is it the only one? My answer is no, and the following content will introduce you to and demonstrate another method for converting your existing monolithic database into a distributed database system running on Kubernetes in a more cloud-native pattern.

With the help of the following illustration, let’s first consider why this is possible.

As you can see from the illustration, the database has two capabilities: computing and storage.

MySQL, PostgreSQL, and other single-node databases combine these two components, deploying them together on a single server or container.

Apache ShardingSphere

Apache ShardingSphere is the ecosystem to transform any database into a distributed database system and enhance it with sharding, elastic scaling, encryption features, and more. It provides two clients: ShardingSphere-Proxy and ShardingSphere-JDBC.

ShardingSphere-Proxy is a transparent database proxy that acts as a MySQL or PostgreSQL database server while supporting sharding databases, traffic governance (e.g., read/write splitting), automatically encrypting data, SQL auditing, and so on. All of its features are designed as plugins, allowing users to leverage DistSQL (Distributed SQL) or a YAML configuration to select and enable only their desired features.

ShardingSphere-JDBC is a lightweight Java framework that brings additional features to Java’s JDBC layer. This driver shares most of the same features with ShardingSphere-Proxy.

As I’ve introduced earlier, if we view monolithic databases as shards (aka storage nodes) and ShardingSphere-Proxy or ShardingSphere-JDBC as the global server (aka computing node), then ultimately, the result is a distributed database system. It can be graphically represented as follows:

Because ShardingSphere-Proxy acts as a MySQL or PostgreSQL server, there is no need to change the connection method to your legacy databases while ShardingSphere-JDBC implements the JDBC standard interface. This significantly minimizes the learning curve and migration costs.

Furthermore, ShardingSphere provides DistSQL, a SQL-style language for managing your sharding database and dynamically controlling the distributed database system’s workloads, such as SQL audit, read/write splitting, authority, and so on.

For example, you may use `CREATE TABLE t_order ()` SQL to create a new table in MySQL. With ShardingSphere-Proxy, `CREATE SHARDING TABLE RULE t_order ()` will help you create a sharding table in your newly upgraded distributed database system.
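For instance, a complete rule for the `t_order` table might look like the following DistSQL. This is a hedged sketch assuming ShardingSphere 5.3+ syntax, two already-registered storage units, and the built-in `hash_mod` algorithm; the storage unit names and shard count are illustrative assumptions, not values from this article:

```sql
-- Illustrative sketch: shard t_order across ds_0 and ds_1 by order_id.
CREATE SHARDING TABLE RULE t_order (
  STORAGE_UNITS(ds_0, ds_1),
  SHARDING_COLUMN=order_id,
  TYPE(NAME="hash_mod", PROPERTIES("sharding-count"="4"))
);
```

Because ShardingSphere-Proxy speaks the MySQL or PostgreSQL protocol, a statement like this can be issued from any ordinary SQL client connected to the Proxy.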


So far, we’ve solved the sharding problem, but how do we make it work on Kubernetes? ShardingSphere-on-cloud provides ShardingSphere-Operator-Chart and ShardingSphere-Chart to help users deploy ShardingSphere-Proxy and ShardingSphere-Operator clusters on Kubernetes.

ShardingSphere-Chart and ShardingSphere-Operator-Chart

The two charts help users deploy the ShardingSphere-Proxy cluster (including the proxies, the governance center, and the database connection driver) and ShardingSphere-Operator using Helm commands.


ShardingSphere-Operator is built around a predefined CustomResourceDefinition that describes a ShardingSphere-Proxy Deployment on Kubernetes. Currently, the operator provides HPA (Horizontal Pod Autoscaler) based on the CPU metric and ensures ShardingSphere-Proxy high availability, maintaining the desired replica number. Through community feedback and development iterations, we’ve found that autoscaling and availability are our users’ foremost concerns. In the future, the open-source community will release even more useful features.

New solution

Users can easily deploy and manage ShardingSphere clusters and create their distributed database system on Kubernetes using these tools, regardless of where their monolithic databases reside.

As previously stated, a database is made up of computing nodes and storage nodes. A distributed database will divide and distribute these nodes. As a result, you can use your existing databases as the new distributed database system’s storage nodes. The highlight of this solution is adopting a flexible computing-storage-splitting architecture, utilizing Kubernetes to manage stateless computing nodes, allowing your database to reside anywhere and drastically reducing upgrading costs.

ShardingSphere-Proxy will act as global computing nodes to handle user requests, obtain local resultSet from the sharded storage nodes, and compute the final resultSet for users. This means there is no need to do dangerous manipulation work on your database clusters. You only have to import ShardingSphere into your database infrastructure layer and combine databases and ShardingSphere to make it a distributed database system.

ShardingSphere-Proxy is a stateless application that is best suited to being managed on Kubernetes, while your databases, as stateful applications, can run on Kubernetes, on any cloud, or on-premises.

On the other hand, ShardingSphere-Operator automates the operational work on Kubernetes, offering availability and auto-scaling features for the ShardingSphere-Proxy cluster. Users can scale in or scale out ShardingSphere-Proxy (computing nodes) and databases (storage nodes) as needed. For example, some users simply want more computing power, and ShardingSphere-Operator will automatically scale out ShardingSphere-Proxy in seconds. Others may discover that they require more storage capacity; in this case, they simply need to spin up more empty database instances and execute a DistSQL command. ShardingSphere-Proxy will reshard the data across the old and new databases to improve capacity and performance.

Finally, ShardingSphere can assist users in resolving the issue of smoothly sharding existing database clusters and taking them into Kubernetes in a more native manner. Instead of focusing on how to fundamentally break the current database infrastructure and seeking a new and suitable distributed database that can be managed efficiently on Kubernetes as a stateful application, why don’t we consider this issue from the other side: how can we make this distributed database system more stateless and leverage the existing database clusters? Let me show you two examples of real-world scenarios.

Databases on Kubernetes

Consider that you have already deployed databases, such as MySQL and PostgreSQL, to Kubernetes using Helm charts or other methods and that you are now only using ShardingSphere charts to deploy ShardingSphere-Proxy and ShardingSphere-Operator clusters.

Once the computing nodes have been deployed, we connect to ShardingSphere-Proxy in the usual way and use DistSQL to register the databases with the Proxy. Finally, the distributed computing nodes connect to the storage nodes to form the final distributed database solution.

Databases on cloud or on-premise

If you have databases on the cloud or on-premises, the deployment architecture will be as shown in the image below. The computing nodes, ShardingSphere-Operator and ShardingSphere-Proxy, are running on Kubernetes, but your databases, the storage nodes, are located outside of Kubernetes.

Pros and Cons

We’ve seen a high-level introduction to ShardingSphere and some real-world examples of deployment. Let me summarize its pros and cons based on these real-world cases and the previous solution introduction to help you decide whether to adopt it based on your particular case.

Pros
Instead of blowing up all your legacy database architecture, it’s a smooth and safe way to own a distributed database system.

With almost no downtime, ShardingSphere offers a migration process that allows you to move and shard your databases simultaneously.

ShardingSphere’s DistSQL enables you to use the distributed database system’s features, such as sharding, data encryption, traffic governance, and so on, in a database native manner, i.e., SQL.

You can scale in or scale out ShardingSphere-Proxy and the databases separately and flexibly depending on your needs, thanks to a non-aggressive computing-storage-splitting architecture.

ShardingSphere-Proxy is much easier to manage and natively deploy on Kubernetes because it is essentially a type of stateless global computing server that also acts as a database server.

As stateful storage nodes, databases can reside on Kubernetes or on any cloud to avoid a single cloud platform lock-in. With ShardingSphere to connect your nodes, you will get a distributed database system.

ShardingSphere is a database ecosystem that provides data encryption, authentication, read/write splitting, SQL auditing, and other useful features. Users gradually discover the advantages of these features, even when they do not need sharding.

ShardingSphere offers two clients based on user requirements: ShardingSphere-Proxy and ShardingSphere-JDBC. Generally, ShardingSphere-JDBC has better performance than ShardingSphere-Proxy, whereas ShardingSphere-Proxy supports all development languages and provides database management capabilities. A hybrid architecture with ShardingSphere-JDBC and ShardingSphere-Proxy is also a good way to combine their strengths.

Apache ShardingSphere is one of the Apache Foundation’s Top-Level projects. It has been open-sourced for over 5 years. As a mature community, it is a high-quality project with many user cases, detailed documentation, and strong community support.

Cons
Transactions are critical, even in a distributed database system. However, because this architecture was not built up from the storage layer, it currently relies on the XA protocol to coordinate transaction handling across the various data sources, which is not yet a perfect, comprehensive distributed transaction solution.

Some SQL queries work well in a single storage node (database) but not in this new distributed system. Achieving 100% SQL support is difficult, but thanks to the open-source community, we are getting close.

Although ShardingSphere defines itself as a computing database server, many users prefer to think of it and their databases as a distributed database. As a result, people must think about obtaining a consistent global backup of this distributed database system. ShardingSphere is working on such a feature, but it is not yet supported (release 5.2.1). Users may require manual or RDS backups of these databases.

Each request is received by ShardingSphere, computed, and forwarded to the storage nodes, so some extra overhead per query is unavoidable. This happens in any distributed database compared with a monolithic one.

Demo
This section demonstrates how to use ShardingSphere and PostgreSQL RDS to build a distributed PostgreSQL database that will allow users to shard data across two PostgreSQL instances.

For this demonstration, ShardingSphere-Proxy runs on Kubernetes, and PostgreSQL RDS runs on AWS. The deployment architecture is depicted in the following figure.

This demo will include the following major sections:

  1. Deploy the ShardingSphere-Proxy cluster and ShardingSphere-Operator.
  2. Create a distributed database and table using Distributed SQL.
  3. Test the Scaling and HA of the ShardingSphere-Proxy cluster (computing nodes).

Prepare database RDS

We need to create two PostgreSQL RDS instances on AWS or any other cloud. They will act as storage nodes.

Deploy ShardingSphere-Operator

  1. Download the repo, create a namespace named `sharding-test` on Kubernetes, and deploy ShardingSphere-Operator.

git clone
kubectl create ns sharding-test
cd charts/shardingsphere-operator
helm dependency build
cd ../
helm install shardingsphere-operator shardingsphere-operator -n sharding-test

  2. Set `automaticScaling: true` and `proxy-frontend-database-protocol-type: PostgreSQL` in values.yaml of `shardingsphere-operator-cluster` and deploy it.

cd shardingsphere-operator-cluster
vim values.yaml
helm dependency build
cd ..
helm install shardingsphere-cluster shardingsphere-operator-cluster -n sharding-test

  3. Following these operations, you will have a ShardingSphere-Proxy cluster containing 1 Proxy instance, 2 Operator instances, and 1 Proxy governance instance, as shown below.

Create a sharding table by using Distributed SQL

  1. Log in to ShardingSphere-Proxy and add the PostgreSQL instances to the Proxy.

kubectl port-forward --namespace sharding-test svc/shardingsphere-cluster-shardingsphere-operator-cluster 3307:3307
psql --host -U root -p 3307 -d postgres
  2. Execute DistSQL to create a sharding table `t_user` with MOD(user_id, 4), and show the actual tables behind this logic table `t_user`.
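As an aside, the MOD sharding rule is easy to reason about. This hypothetical Python snippet (an illustration, not DistSQL) shows which actual table each row of the logic table `t_user` lands in:

```python
def route(user_id, shard_count=4):
    # MOD(user_id, 4) maps the logic table t_user onto
    # the actual tables t_user_0 .. t_user_3.
    return f"t_user_{user_id % shard_count}"

for uid in (1, 2, 5, 8):
    print(uid, "->", route(uid))  # 5 -> t_user_1, 8 -> t_user_0, etc.
```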

  3. Insert some test rows and run a query on ShardingSphere-Proxy to get the merged final result.

  4. Log in to the two PostgreSQL instances to check their local results.

This simple test shows that ShardingSphere can manage and shard your databases; users do not need to worry about how the data is split across the shards.

Test the Scaling and HA of the ShardingSphere-Proxy cluster (computing nodes)

If you discover that the TPS (transactions per second) or QPS (queries per second) of this new system are extremely high and users complain that it takes too long to open a webpage, it’s time to upgrade your database system’s computing power.

Compared to other distributed database systems, ShardingSphere-Proxy offers the simplest way to increase computing nodes. ShardingSphere-Operator can ensure ShardingSphere-Proxy availability and autoscale the instances based on CPU metrics. Furthermore, by modifying its specification (for example, the replica count), you can make the cluster scale in or scale out, as follows:

After upgrading the release, you will have two ShardingSphere-Proxy instances, which means more computing power.

If, as mentioned above, you require more storage capacity, you can take the following steps.

  1. Launch additional PostgreSQL instances in the cloud or on-premises.
  2. Add these new storage nodes to the ShardingSphere-Proxy.
  3. Run distributed SQL to allow ShardingSphere to assist you with resharding.
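To see why resharding is needed, the sketch below (a hypothetical illustration, not ShardingSphere's actual algorithm) identifies which rows must move when the shard count grows from two to four under a MOD rule:

```python
def moved_rows(user_ids, old_shards=2, new_shards=4):
    # A row must be moved if its MOD-based placement changes
    # when the number of shards changes.
    return [u for u in user_ids if u % old_shards != u % new_shards]

print(moved_rows(range(8)))  # rows 2, 3, 6 and 7 change shards
```

ShardingSphere performs this redistribution for you via distributed SQL, so the application never has to track which rows moved.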

Conclusion
The focus of this article is a new sharding database architecture on Kubernetes that leverages your existing monolithic databases, allowing the DevOps team to evolve their database infrastructure into a modern one efficiently and smoothly.

The database computing-storage split is a vintage architecture that is re-interpreted and fully leveraged on Kubernetes today to help users address the governance issue of the stateful database on Kubernetes.

These days, distributed databases, cloud computing, open source, big data, and modern digital transformation are common buzzwords. But they represent useful new concepts, ideas, and solutions that address real production concerns and needs. As I always recommend to our end users: stay open to new ideas, learn their pros and cons, and then choose the best one for your specific situation, as there is no such thing as a perfect solution.


Bringing a Product Mindset into DevOps

Key Takeaways

  • Delivery pipelines are the “mechanism” through which organisations turn ideas into valuable products in the hands of “users”
  • At the most basic level, pipelines are enablers of value delivery; at their best, they can be a source of competitive advantage
  • While there are patterns and best practices, no single pipeline design will satisfy all organisations, nor will a delivery pipeline remain static over time.
  • Consequently, we need to treat delivery pipelines as a product or service; that is, design, implement and deliver them with an eye on what the organisational goals, user and stakeholder needs are currently (and over time).
  • Adding a product mindset to DevOps capabilities is a key ingredient to finding this balance between desirability, viability and feasibility. This strengthens DevOps capabilities and the pipelines they deliver, and ultimately, an organisation’s ability to deliver value. 


To be successful, organisations need two things: products and services their customers find valuable and the ability to deliver these products and services well…

In this article I will demonstrate why – consequently – we must design, implement and operate our delivery pipelines (the means of turning ideas into products in the hands of users) as we would any other product or service: by adding a “product mindset”.

I will approach this in three parts: first, what I mean by “pipelines (and DevOps)”, second, why we should treat pipelines as a product, and third, what a product mindset is, and how in practice, product management can help and be added to DevOps.

What is a (delivery) “pipeline”?

I see delivery pipelines as the tools, techniques and practices through which we turn ideas into products and services which users can then use and organisations can operate, support and evolve. (DevOps, for the purpose of this article, is the discipline that designs, builds and operates pipelines).

I want to take a broad perspective for the end-to-end of a pipeline: the full value chain starting with identifying problems and opportunities, setting strategic goals, defining streams, to solution design, analysis, implementation and quality assurance, to compliance, operations and customer support, and of course, product use.

The traditional “inner” and more holistic “outer” cycle of a pipeline covering the full value chain.
Icons by Flaticon: Elias Bikbulatov

Why do pipelines matter?

Many organisations I work with believe that pipelines are “technical stuff” that sweaty engineers look after somewhere in the basement, but that non-techies, certainly not management, don’t have to worry about…

This could not be further from the truth, because pipelines matter at business level, for three reasons:

Pipelines are enablers

At the basic level, a pipeline is an enabler to turn ideas into products in the hands of users (and subsequently operate and manage them).

Unfortunately, the pipelines of many organisations are disjointed (there are breaks in the process like stage gates or manual handovers), inefficient (manual testing, manual resource provisioning, limited self-provisioning), or they meet the wrong requirements (over-designed in parts while having gaps leading to bottlenecks in other areas, e.g. the idea that hard-to-configure, GUI-driven tools are better than the command line).

Surprisingly, this is frequently seen as “the way it is”, and organisations and teams largely accept the resulting lengthy and clunky process, which leads to:

  • Fewer features in the hands of users
  • At a lower quality
  • With slower organisational learning
  • And overall increased pain to deliver and operate (and consequently less motivated teams)

So if for no other reason than good processes, efficiency and effectiveness, you really will want a slick pipeline.

One size does not fit all

I have worked with early-stage startups whose key goal was to go to market fast, attract users and learn, using lightweight tooling like Vercel or Heroku to keep DevOps cost and effort to a minimum, all while being able to deploy directly to production many times a day, controlling feature availability via feature-flagging tools such as LaunchDarkly.

Financial (but also many other) organisations I work with tend to have a pipeline of more or less separated environments including dev, integration, UAT, staging and production sometimes on prem, frequently with quite heavily formalised and often manual deployment procedures. Ultimately, how these processes are defined depends on each organisation’s culture, stance to risk and quality.

I have also worked with a big government department deploying continuously 100+ times a day across numerous environments, all fully automated and with highest degrees of self-serve capabilities for engineers to provision resources and run tests.

My team also worked with a medical services company that in the not-so-recent past used to burn code and (manually created) release notes for their quarterly public releases onto DVDs to satisfy regulatory requirements.

The point is this: while there are best practice patterns and paradigms for pipelines (such as continuous integration, high degrees of automation, and enablement through self serve), there isn’t a one-size-fits-all off-the-shelf pipeline to fit all organisations, nor one that would fit the same organisation over time; organisations have different needs and demands in terms of how to handle ideas and requirements, create, deploy and test code, how to quality assure, how to report, how to run, operate, and audit, and their needs will change as do their strategy and the environment (new strategic goals, new customer expectations, new regulatory requirements, new technologies).

So we need to tailor our pipeline to what our needs are currently, and allow for evolution so they can become what they need to be in the future. 

Pipelines are strategic assets and can be a source of competitive advantage

If we consider the intrinsic role of an organisation’s pipeline(s) as “enabler” of value delivery, and that their design is contextual, then we should not only treat them as a corporate asset, but also consider them a potential source of competitive advantage.

Over three years, my team helped the medical services company I mentioned above to streamline the regulatory process: allowing compliance to raise risks against epics and stories, link them to features, codebase, test cases and test results (in their backlog management tool), and automatically generate release notes covering and linking all these aspects in a traceable and auditable manner. This reduced an effort of around 20 person-days per release to practically nothing, and it increased the quality of the release documentation.

By deploying directly to production, the startup, acting in a highly competitive space with fast moving innovation, had an edge over other companies by being leaner (fewer people) and being faster to market (more value for customers, at a lower cost and faster learning cycles).

So if we get our pipeline right, it is not just another corporate enabler, but can become a source of competitive advantage.

Pipelines matter, so treat them like a product

If pipelines matter, and we can’t just get one off the shelf, we need to treat them with consideration and care, knowing what the right thing to build now and later is, and give guidance to our (DevOps) team… So in other words, we need to treat them like a product, and add a product mindset.

Consider this: we would not ask an engineering team to “just build us” an ecommerce website or a payment gateway, or a social media app. We would support them by defining goals, research what is valuable to our users and provide this as context and guidance to our designers and engineers…

What is a product mindset?

A product mindset is about delivering things that provide value to our users, within the context of the organisation and its strategy, and doing so sustainably (i.e. balancing the now and the future).

For the purpose of this article, I will use product thinking, product mindset and product management very much interchangeably.

Creating product-market-fit by balancing desirability, viability and feasibility as the job of product management

In practice this means achieving product-market-fit by balancing what our users need, want and find valuable (desirability), what we need to achieve (and can afford) as an organisation (viability) and what is possible technically, culturally, legally, etc (feasibility), and doing this without falling into the trap of premature optimisation or closing options too early.

To give a tiny, very specific, but quite telling example: for the medical device organisation we chose Bash as scripting language because the DevOps lead was comfortable with it. Eventually we realised that the client’s engineers had no Bash experience, but as a .Net shop were far more comfortable with Python. Adding a user-centric approach which is part of a product mindset at an early stage would have prevented this mistake and the resulting rework.

How do you “product manage” a pipeline?

Ultimately, you just “add product”, which is a flippant way of saying you do the same thing as we would with any other product or service.

For a startup I worked with, this meant that the lead engineer “just put a product hat on” and looked at the pipeline through the lens of early business goals: use an MVP to gauge product-market fit with a small, friendly and highly controlled group of prospects. Consequently, he recommended opting for speed, e.g. deploying directly to production, feature flags to manage feature availability, AWS Lambdas and AWS Cognito. We would then monitor business development and scale / build more custom features (e.g. authentication) as and when required (rather than build for a future that might never come).
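As a toy illustration of the feature-flag pattern described above (hypothetical Python, not the LaunchDarkly SDK): code ships to production continuously, while a flag controls who actually sees the feature.

```python
# Hypothetical in-memory flag store: the code is deployed to production,
# but the feature is only visible to an allow-listed group of users.
FLAGS = {"new-checkout": {"enabled": True, "allowed_users": {"alice"}}}

def is_enabled(flag_name, user):
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    return user in flag["allowed_users"]

print(is_enabled("new-checkout", "alice"))  # True
print(is_enabled("new-checkout", "bob"))    # False
```

In practice a hosted service evaluates flags with targeting rules and percentage rollouts, but the decoupling of “deploy” from “release” is exactly this.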

The insurance company from our earlier examples had asked us to help them build a platform to support 100+ microservices and cloud agnosticity (to ensure business continuity). As this was a complex environment, we added a dedicated product owner to support a team of DevOps engineers. First she facilitated a number of workshops with the product and engineering teams to understand how they currently worked and what was in place. It quickly became apparent that the organisation was missing milestones promised to their clients, because engineers could not release code efficiently (due to manual steps and resource constraints when moving code between environments and provisioning resources). It also became apparent that the organisation would only have three microservices for the next 12 months and that cloud agnosticity was a long-term aspiration, not a must-have requirement at this point.

Digging into what “value” really meant for the organisation, everyone agreed that right now the teams needed to build and release quality features and hit the milestones promised to customers. Consequently, the product owner reprioritised with the team, creating a roadmap that would focus on removing the blockers resulting from only two engineers being allowed to manually deploy code to staging and production, then empower engineers through basic self-serve (self-provisioning of new microservices and other resources based on standardised templates). Initially this would be focused on one cloud provider, but with future cloud-agnosticity in mind. Pointing out that there were only three microservices at this point in time, it was also agreed to address building a microservice mesh at a later stage, as and when complexity required this…

Tools to product manage a pipeline

Generally speaking, the tools and techniques to “product manage” a pipeline are the same as those for “normal” product management. The following “framework” is a good starting point:

1. Establish context

Start with setting the scene. Understand the context in which the pipeline will operate. Define and align on:

  • What near, mid and long-term goals the delivery pipeline needs to support
  • What key opportunities there are, what problems can be solved
  • What the key constraints are, and what is possible

Remember the medical services example: the initial brief was to containerise existing applications and move them into the cloud. While this was necessary, during our analysis we found that this alone wouldn’t give the organisation the expected benefits of increased throughput, but that this could only be achieved by streamlining the regulatory approval process. 

Modelling of any existing process is highly useful at this stage, especially with a view on bottlenecks and missed opportunities.

2. Identify potential users

As a second step, you will want to understand who will be using the pipeline, benefiting from it, and impacted by it. And you will want to take a broad view here.

You’ll have your usual suspects like engineers, QAs, and DevOps engineers, but I suggest you expand to cover a wider audience including product people, sales and marketing, and specialist stakeholders, such as, in the case of the medical software example, compliance and regulatory bodies. A stakeholder map or onion is my preferred model for this, but a simple list might do just fine.

Example stakeholder onion for the medical services organisation, focusing in on regulatory compliance stakeholders.

3. Identify users’ jobs, needs, gains and pains

Next, you will want to understand what jobs these users need to accomplish, their needs, related gains and pains, and their expectations and requirements. The value proposition canvas or a similar model, or user personas, work well here. In a subsequent step, we can use these tools to start identifying potential solutions for each of these “requirements”.

Note that you may not know where to start, but you also will not want to over-analyse. Here a service blueprint or an experience map can come in handy, as they allow us to link users, needs and pain points, thus allowing us to identify where it is worth spending more analysis effort. Experience maps and service blueprints are also excellent communication tools that we can even use to show progress.

Coming back to the medical services company, consider the compliance manager: they are worried about identifying risks and one of their needs is to demonstrate traceability (solution: integrate risk management and the backlog tracker), but creating release documentation is long-winded and error prone (solution: automate document generation), and they would love it if it was all directly submitted to the regulator (solution: integrate).

An adaptation of the Ideation Canvas by Futuris to identify user expectations and potential solutions (as an alternative to the Value Proposition Canvas by Strategyzer).

Experience Map illustrating the process and pain points.

4. Prioritise

Finally, based on all the previous work, you’ll want to prioritise: what to do first, what to support next. A feature map is the perfect tool for this. Here it is best practice to group features into releases that address organisational and team goals over time, thus linking back to the goals identified in the very first activity, creating our product roadmap.

For our medical services company, this meant:

  1. Enable a basic end-to-end process so that teams can easily deploy code across all environments
  2. Create a live environment certified by the regulators
  3. Enable compliance documentation automation
  4. Enable strong self-serve capabilities

Example feature map indicating four “releases”, each with prioritised features, based on the story map concept first “invented” by Jeff Patton.

Build vs buy

A frequent question that arises is where to invest and innovate, where to build, which aspects to own, which to outsource, buy, or rent.

I find that Wardley Maps are a great tool when making these decisions, as they guide our strategic approach based on what is relevant across the value chain and where the various solution options sit in terms of industry maturity. This then informs whether to “build or buy”, and whether and how to enable or prevent commoditisation.

Illustration of Wardley Map, for example medical device company, illustrating that there is competitive advantage in innovating the regulatory compliance process

Returning to the medical services company, the Wardley Map for their delivery pipeline confirmed that a good integration server was important, but also commoditised, and that we should choose a best-of-breed solution, obviously. More importantly, it indicated that automation of the compliance process was a source of efficiency and competitive advantage, but that there was no existing solution, and that we should innovate in this space. The question the Wardley Map subsequently posed was whether we should IP-protect this process and keep it proprietary, or whether it was more beneficial to work with competitors and regulators to create an industry standard.

When’s it done?

The above activities are especially useful in the early stages of working on a pipeline, for instance during an inception. This inception toolkit provides a pattern and templates which my teams use to set up initiatives. However, as with any product development, you are never done; product management is a continuous activity, not a one-off.

Organisational goals will change, user expectations will evolve, and technologies become outdated while new ones become available. Consequently, the pipeline has to adapt and evolve, too. Just think of an ever-changing compliance landscape, or how an organisation might find itself in one industry and one market today, and in totally different ones tomorrow; also, how we have moved from on-prem hosting to cloud to serverless, and how new technologies such as big data and ML have brought different needs in terms of infrastructure.

Where does that leave you?

Adding a product mindset is beneficial

The feedback I have had from teams and clients, as well as the measurable improvements (throughput, cycle times, quality, value delivery) clearly indicate that adding a product mindset to DevOps is not only a nice-to-have, but a must.

For DevOps engineers it makes their lives easier, it allows them to focus on the right thing, it empowers them: at the most basic level it removes noise and worry linked to not being clear on what to do, and allows them to create a slick pipeline that makes everyone’s lives easier; at its best, it allows DevOps to create a strategic asset for the organisation.

For the organisation, it makes sure we deliver value, it enables product delivery and ensures we are using funding wisely, by supporting the creation of a pipeline that allows all parts of the organisation to work towards achieving strategic goals and reducing the risk and waste that arises when teams are not sure what they should be doing, are not clear on who to listen to, or which solutions to focus on.

So where does that leave you?

We can “add product” in a very lightweight and informal way by “just keeping it in mind”, or in a more formal way by adding a dedicated product specialist to support DevOps engineers. This means that teams have options to suit their appetite, culture and budget.

When the proposed tools and practices strike a chord, and when you feel comfortable to get your toes wet, there is no reason you couldn’t adopt them tomorrow. You don’t even have to do “everything”: any of the techniques I mention above on their own will add incremental value.

Where this is all a bit new, just grab one of your product colleagues and start involving them in your analysis and decision processes more or less loosely…

Further information and a recording of a conference talk on this topic can be found here.

CloudFormation or Terraform: Which IaC Platform is the Best Fit for You?

Key Takeaways

  • While both CloudFormation and Terraform have the concept of modules, Terraform’s is better defined
  • State storage in Terraform requires special care as the state file is needed to understand the desired state and can contain sensitive information
  • CloudFormation excels at deploying AWS infrastructure whereas Terraform is best suited for dynamic workloads residing in multiple deployment environments where you want to control additional systems beyond the cloud
  • Many organizations choose to use Terraform for databases and high-level infrastructure and CloudFormation for application deployment
  • Since Terraform uses HCL, this can create beneficial segregation between Ops and Dev as Dev may not be familiar with this language.

While both CloudFormation and Terraform are robust IaC platforms that offer efficient configuration and automation of infrastructure provisioning, there are a few key differences in the way they operate. CloudFormation is an AWS tool, making it ideal for AWS users looking for a managed service. Terraform, on the other hand, is an open-source tool created by Hashicorp which provides the full flexibility, adaptability, and community that the open-source ecosystem has to offer. These differences can be impactful depending on your specific environment, use cases, and several other key factors.  

In this post, I’ll compare CloudFormation and Terraform based on important criteria such as vendor neutrality, modularity, state management, pricing, configuration workflow, and use cases to help you decipher which one is the best fit for you.

But first, I’ll provide a bit of background on each platform and highlight the unique benefits that each of them brings to the table.

What is CloudFormation?

AWS CloudFormation is an Infrastructure as Code (IaC) service that enables AWS cloud teams to model and set up related AWS and third-party resources in a testable and reproducible format. 

The platform helps cloud teams focus on the application by abstracting away the complexities of provisioning and configuring resources. You also have access to templates to declare resources; CloudFormation then uses these templates to organize and automate the configuration of resources as well as AWS applications. It supports various services of the AWS ecosystem, making it efficient for both startups and enterprises looking to persistently scale up their infrastructure.

Key features of CloudFormation include:

  • Declarative Configuration with JSON/YAML
  • The ability to preview environment changes
  • Stack management actions for dependency management
  • Cross-regional account management
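To give a feel for the declarative JSON/YAML configuration mentioned above, here is a minimal illustrative CloudFormation template (the S3 bucket resource and names are placeholders chosen for this sketch, not taken from the article):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal illustrative template (resource and names are placeholders)
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
    Properties:
      # !Sub interpolates pseudo parameters such as the account ID
      BucketName: !Sub "demo-bucket-${AWS::AccountId}"
```

CloudFormation reads a template like this, computes the change set against the current stack, and provisions or updates the declared resources.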

What is Terraform?

Terraform is Hashicorp’s open-source infrastructure-as-code solution. It manages computing infrastructure lifecycles using declarative, human-readable configuration files, enabling DevOps teams to version, share, and reuse resource configurations. This allows teams to conveniently commit the configuration files to version-control tools for safe and efficient collaboration across departments. 

Terraform leverages plugins, also called providers, to connect with other cloud providers, external APIs, or SaaS providers. Providers help standardize, provision, and manage the infrastructure deployment workflow by defining individual units of infrastructure as resources.

Key features of Terraform include:

  • Declarative configurations via Hashicorp Configuration Language (HCL)
  • The support of local and remote execution modes
  • Default version control integration
  • Private Registry
  • Ability to ship with a full API

Comparing CloudFormation and Terraform

Vendor Neutrality

The most well-known difference between CloudFormation and Terraform is the association with AWS. While you can access both tools for free, as an AWS product, CloudFormation is only built to support AWS services. Consequently, it is only applicable for deployments that rely on the AWS ecosystem of services. This is great for users who run exclusively on AWS as they can leverage CloudFormation as a managed service for free and at the same time, get support for new AWS services once they’re released.

In contrast, Terraform is open-source and works with almost all major cloud service providers, such as Azure, AWS, and Google Cloud Platform. As a result, organizations using Terraform can provision, deploy, and manage resources on any cloud or on-premises infrastructure, making it an ideal choice for multi-cloud or hybrid-cloud users. Furthermore, because it is open source and modular, you can create a provider to wrap any kind of API or simply use an existing one. Providers for a wide range of services have already been implemented by various vendors, which offers users far more flexibility and convenience, making Terraform suitable for a greater variety of use cases.

However, on the downside, Terraform often lags behind CloudFormation with regard to support for new cloud releases. As a result, Terraform users have to play catch-up when they adopt new cloud services.


Modularity

Modules are designed for reusing and sharing common configurations, which makes complex configurations simple and readable. Both CloudFormation and Terraform have module offerings; however, CloudFormation’s is newer, and therefore not as mature as Terraform’s.

CloudFormation has always offered features to build modules using templates, and as of 2020, it offers out-of-the-box support for modules as well. Traditionally, CloudFormation leveraged nested stacks, which allow users to import and export commonly used configuration settings. Over the past few years, however, CloudFormation launched both public and private registries. As opposed to Terraform, CloudFormation offers its private registry right out of the box, which enables users to manage their own code privately without the risk of others gaining access to it. Furthermore, CloudFormation’s public registry offers a wide array of extensions such as MongoDB, Datadog, JFrog, Check Point, Snyk, and more.

Despite the fact that CloudFormation has come a long way in its modularity, Terraform has innately supported modularity from the get-go, making its registry more robust and easier to use. The Terraform registry contains numerous open-source modules that can be repurposed and combined to build configurations, saving time and reducing the risk of error. Terraform additionally offers native support for many third-party modules, which can be consumed by adding providers or plugins that support the resource type to the configuration.
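To illustrate how little code it takes to consume a registry module, here is a sketch that pulls the widely used community VPC module from the public Terraform Registry (the inputs shown are illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  # Inputs exposed by the module; values are examples only
  name = "demo-vpc"
  cidr = "10.0.0.0/16"
}
```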

State Management

One of the benefits of CloudFormation is that it can provision resources automatically and consistently perform drift detection on them. It bundles AWS resources and their dependencies in resource stacks, which it then uses to offer free, built-in support for state management. 

In contrast, Terraform stores state locally on disk by default. Remote storage is also an option, but states stored remotely are written in a custom JSON format outlining the modeled infrastructure and must be managed and configured. If you do not manage state storage properly, it can have disastrous repercussions.

Such repercussions include the inability to perform disaster recovery because drifts have gone undetected, leading to extended downtime. This occurs when the state is impaired and the code cannot run, which means recovery has to be done manually or started from scratch. Another negative repercussion is the state file unexpectedly becoming public. Since state files often store secrets, such as database keys or login details, this information is dangerous to your organization if it gets into the wrong hands; if hackers find state files, it is easier for them to attack your resources. This is an easy mistake to make since, generally speaking, Terraform users who manage their own state on AWS store the files in an S3 bucket, which is one way state files can be exposed publicly.

To combat this challenge, Terraform supports self-managed remote state backends, for example AWS S3 for state storage combined with a DynamoDB table for state locking. In addition, users can rely on Terraform Cloud’s remote state management to maintain state files as a service.
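A self-managed remote state setup of this kind might look like the following sketch, assuming the S3 bucket and DynamoDB lock table already exist (all names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # pre-created, access-restricted bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                    # encrypt state at rest
    dynamodb_table = "terraform-locks"       # table used for state locking
  }
}
```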

Pricing, License & Support

CloudFormation is a free service within AWS and is supported by all AWS pricing plans. The only cost associated with CloudFormation is that of the provisioned AWS service. 

Terraform is open-source and free, but they also offer a paid service called Terraform Cloud which has several support plans like Team, Governance, and Business that enable further collaboration. Terraform Cloud offers additional features like team management, policy enforcement, a self-hosted option, and custom concurrency. Pricing depends on the features used and the number of users. 


Language & Syntax

CloudFormation templates are built using JSON/YAML, while Terraform configuration files are built using HCL syntax. Although both are human-readable, YAML is widely used in modern automation and configuration platforms, which makes CloudFormation easier to adopt.

On the other hand, HCL enables more flexibility in configuration, but the language takes some getting used to.

It’s also worth mentioning some IaC alternatives that offer solutions for those who prefer to use popular programming languages. For example, in addition to CloudFormation, AWS offers the CDK, which enables users to provision resources using their preferred programming languages. Terraform users can enjoy the same benefits with CDK for Terraform (CDKTF), which allows you to define Terraform configurations in TypeScript, Python, Java, C#, and Go. Alternatively, Pulumi offers an open-source IaC platform that can be configured with a variety of familiar programming languages.

Configuration Workflow

With CloudFormation, templates are stored locally by default or in an AWS S3 bucket. The template is then used with the AWS CLI or the AWS Console to build the resource stack.

Terraform uses a straightforward workflow that only relies on the Terraform CLI tool to deploy resources. Once configuration files are written, Terraform loads these files as modules, creates an execution plan, and applies the changes once the plan is approved. 
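That workflow boils down to a handful of CLI commands; a typical sketch of a run looks like this (flags and environment vary by project):

```shell
terraform init      # download providers/modules and configure the backend
terraform plan      # load the configuration and build an execution plan
terraform apply     # apply the plan once it is approved
```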

Use Cases

While both CloudFormation and Terraform can be used for most standard use cases, there are some situations in which one might be more ideal than the other. 

Being a robust, closed-source platform that is built to work seamlessly with other AWS services, CloudFormation is considered most suitable in a situation where organizations prefer to run deployments entirely on AWS and achieve full state management from the get-go. 

CloudFormation makes it easy to provision AWS infrastructure. Plus, you can more easily take advantage of new AWS services as soon as, or shortly after, they’re launched due to the native support, compliance, and integration between all AWS services. In addition, if you’re working with developers, the YAML format tends to be more familiar, making CloudFormation much easier to use.

In contrast, Terraform is best suited for dynamic workloads residing in multiple deployment environments where you want to control additional systems beyond the cloud. Terraform offers providers specifically for this purpose whereas CloudFormation requires you to wrap them with your own code. Hybrid cloud environments are also better suited for Terraform. This is because it can be used with any cloud (not exclusively AWS) and can, therefore, integrate seamlessly with an array of cloud services from various providers–whereas this is almost impossible to do with CloudFormation. 

Furthermore, because it’s open source, Terraform is more agile and extendable, enabling you to create your own resources and providers for various technologies you work with or create.

Your given use case may also benefit from implementing both–for example, with multi-cloud deployments that include AWS paired with other public/private cloud services. 

Another example where both may be used in tandem is with serverless architecture. For example, at Zesty, we use the Serverless Framework, which uses CloudFormation under the hood. However, we also use Terraform for infrastructure. Since we work with companies using different clouds, we want the ability to use the same technology to deploy infrastructure in multiple cloud providers, which makes Terraform the obvious choice for us. Another unplanned benefit this adds for us is the natural segregation between Dev and Ops. Because Ops tend to be more familiar with the HCL language, it creates boundaries that make it more difficult for another team to make a mistake or leave code open to attacks.

In general, many organizations choose to use Terraform for databases and high-level infrastructure and CloudFormation for application deployment. They often do this because it helps to distinguish the work of Dev and Ops. It’s easier for developers to start from scratch with CloudFormation because it runs with YAML or JSON which are formats every developer knows. Terraform, on the other hand, requires you to learn a different syntax. The benefit of creating these boundaries between Dev and Ops is that one team cannot interfere with another team’s work, which makes it harder for human error or attacks to occur. 

It’s worth noting that even if you don’t currently use both IaC platforms, it’s ideal to learn the syntax of each in case you wind up using one or the other in the future and need to know how to debug them. 


Conclusion

As we have seen, both CloudFormation and Terraform offer powerful IaC capabilities, but it is important to consider your workload, team composition, and infrastructure needs when selecting your IaC platform.

Because I’m partial to open-source technologies, Terraform is my IaC of choice. It bears the Hashicorp name, which has a great reputation in the industry as well as a large and thriving community that supports it. I love knowing that if I’m not happy with the way something works in Terraform, I can always write code to fix it and then contribute it back to the community. In contrast, because CloudFormation is a closed system, I can’t even see the code, much less change something within it. 

Another huge plus for Terraform is that it uses the HCL language which I prefer to work with over JSON/YAML. The reason for this is that HCL is an actual language whereas JSON and YAML are formats. This means when I want to run things programmatically, like running in a loop or adding conditionals, for example, I end up with far more readable and writable code. When code is easier to read, it’s easier to maintain. And since I’m not the only one maintaining this code, it makes everyone’s life easier. 

Another reason I prefer to use Terraform is due to our extensive use of public modules, which we needed to leverage prior to CloudFormation’s public registry offering.

While CloudFormation may be quicker to adopt new AWS features and manage the state for you for free, all things considered, I prefer the freedom that comes with open source, making Terraform a better choice for my use case.

Hope you found this comparison helpful!

Who Moved My Code? An Anatomy of Code Obfuscation

Key Takeaways

  • Keeping programs or technology safe is more important than ever. Combined measures, protection layers, and various methods are always required to establish a good protective shield. 
  • Obfuscation is an important practice to protect source code by making it unintelligible, thus preventing unauthorized parties from easily decompiling or disassembling it.
  • Obfuscation is often mistaken for encryption, but they are different concepts. Encryption converts information into secret code that hides the information’s true meaning, while obfuscation keeps the information intact but obscure.
  • There are various methods to obfuscate code, such as using random shuffle, replacing values with formulas, adding ‘garbage’ data, and more.
  • Obfuscation works well with other security measures, and is not a strong enough measure on its own. 

In the bipolar world we live in, technology, open source software, and knowledge are freely shared on one hand, while the need to prevent attackers from reverse engineering proprietary technologies is growing on the other. Sometimes, the price of technology theft can even risk world peace, just like in the case of the Iranians, who developed a new attack drone based on a top-secret CIA technology they reverse engineered. Code obfuscation is one measure out of many in keeping data safe from intruders, and while it might not bring world peace, it can, at least, bring you some peace of mind.

When it comes to high-end and sophisticated technology, Iran never had the upper hand – the embargo and sanctions did not leave Iran with any technological advantage except for one: creativity. The Iranians find the most creative ways to try and stay on top. To prove our point, here’s an interesting story: in 2011, using simple signal interference, Iran hijacked an American super-secret drone: the RQ-170 Sentinel, the state-of-the-art intelligence-gathering drone used by the CIA. It took the Iranians “only” a few years to reverse engineer the Sentinel, an effort which paid off well: it led to the production of the Iranian Shahed 191 Saegheh, which is based on the Sentinel’s technology and was recently sold to Russia. 

What can programmers, technology vendors, and governments do to keep their technologies safe from the sticky fingers of malicious attackers who want to reverse engineer valuable technologies?

Keeping programs or technology safe is like keeping your house safe from burglars: the more valuables you have, the more measures you take to protect them, taking into account that in most cases, no one can guarantee your home is 100% safe. The same goes for protecting source code: we want to prevent unauthorized parties from accessing the logic, or the “sauce secrète”, of our application, extracting data, cloning, redistributing, or repacking our code, or exploiting vulnerabilities. 

The best security experts will tell you that there’s never an easy, or a single solution to protect your intellectual property, and combined measures, protection layers and methods are always required to establish a good protective shield. In this article, we focus on one small layer in source code protection: code obfuscation.

Though it’s a powerful security method, obfuscation is often neglected, or at least misunderstood. When we obfuscate, our code becomes unintelligible, thus preventing unauthorized parties from easily decompiling or disassembling it. Obfuscation makes our code impossible (or nearly impossible) for humans to read or parse. Obfuscation is, therefore, a good safeguarding measure used to preserve the proprietary nature of the source code and protect our intellectual property.

To better explain the concept of obfuscation, let’s take “Where’s Waldo” as an example. Waldo is a known illustrated character, always wearing his red and white stripy shirt and hat, as well as black-framed glasses. The challenge is to find Waldo among dozens or even hundreds of people doing a variety of amusing things in a double-paged illustration, full of situations, characters, objects and events. It’s not always easy, and it might take some time to parse the illustration, but Waldo will always be found in the end, thanks to his unique looks.

Now imagine Waldo without his signature stripy shirt, hat, or glasses – instead, he wears a different shirt every time, different hat, and a wig. Sometimes he will even be dressed as a woman. How easy would it be to find him? Probably near to impossible. 

Figure 1 Imagine looking for Waldo without his signature stripy shirt, glasses, and hat. Instead, he will be wearing regular clothes and a face mask.

Using the same concept, when we obfuscate, we hide parts of a program’s code, flow, and functionality in a way that will make them unintelligible – we mask them, we “twist”, scramble, rename, alter, hide, transform them, and on top of that, we pour a layer of junk. 

Good obfuscation will use all these methods while keeping our obfuscated code indistinguishable from the original, non-obfuscated source code. Generating code which looks like the real thing will confuse any attacker, whilst making reverse engineering a difficult proposition to undertake.

Bear in mind that obfuscation, like any other security measure, does not come with a 100% guarantee, yet it can come as close to it as possible if done right, especially if combined with other security measures. 

It’s important to differentiate between obfuscation and encryption, which are often mistaken for one another though they are not the same. Obfuscation and encryption are two different concepts, and one does not replace the other – if anything, they complement each other. 

When we encrypt, we convert information into secret code that hides the information’s true meaning. When we obfuscate, the information stays as is, but in an obscure format, as we increase its level of complexity to the point that it’s impossible (or nearly impossible) to read or parse. 

A strong encryption is a strong security measure, but we must keep in mind that any lock will be open at some point. Anything encrypted must be decrypted in order to be used, which is like opening the door of the fortress – however strong, it’s still a weak spot. This is where the advantage of obfuscation comes into place: when we obfuscate, we do not encrypt, we simply hide our code in plain sight. Think of obfuscation as hiding the needle in the haystack – if done well, it will take an unreasonable amount of time and resources for an attacker to find your “needle”. 

From our years of experience as programmers and obfuscation advocates, we have found that obfuscation is a bit like Brexit – experts are either utterly for it or passionately against it. However, let’s remember that security always requires several methods used in conjunction with one another – if one fails, the other will still be there – which is exactly why obfuscation and encryption make a good pair. Obfuscation should always come last: after you add layers of encryption and fully debug the program, it’s time to obfuscate. 

Though this article focuses on how to create a string obfuscation tool, it’s important to point out that, in real life, commercial obfuscation tools obfuscate much more than strings – they include obfuscating functions, API calls, variables, libraries, values, and much more.

Large corporations use obfuscation for any sensitive software. For example, Microsoft Windows’ PatchGuard is fully obfuscated and extremely difficult to reverse engineer. If you are a programmer, you probably don’t own the fancy security tools big corporations use, and why should you? But that doesn’t mean you shouldn’t be able to protect your code using some simple and practical measures. Obfuscating strings is a good way to avoid the use of expensive and complex obfuscation tools on one hand and make your code unintelligible on the other. 

In fact, if you take a typical executable and dive into it using any hex editor, or even Notepad, you may find many strings among the binary data which reveal trade secrets, IP addresses, or other pieces of information (figure 2), all in the form of strings, that you really don’t want to give away.

Figure 2: If we open an exe using a hex editor, we can find strings which might reveal a lot of information that can be exploited by attackers. In this case, the string “calculator” is found.

Now, let’s say your software connects to a remote server and you store the IP being used and don’t want it to be revealed. You can mask and hide the sensitive data that way, but the data will only be hidden in the executable file. Of course, once you communicate with a remote server, sniffing tools will show the IP along with anything sent and received – so take that into account. We should point out that there are ways to hide both IP and data even from sniffing tools (such as Wireshark), but that’s a subject of its own. 

There is more than one method to obfuscate your code, as obfuscation itself should be implemented on several levels, or layers – whether it’s the semantic structure, the lexical structure, control flow, API calls, etc. In order to create robust protection, we must use several techniques. As the focus in this article is on string obfuscation, let’s explore four sub-methods.

The importance of being random

When we think of random numbers, we can imagine a lottery machine: the machine uses spinning paddles at the bottom of the drum, and it spins the balls randomly around the chamber. A ball is then shot through a tube, meaning that each ball is randomly picked.

You might ask: why do we need to use random elements in our code? The answer is that one of the methods to decode obfuscated data is to examine what you expect to be the logical order of things, and once we randomize this order, it’s harder to guess what the obfuscated data is in the first place. 

The big question is: can a computer program generate real random numbers without any hidden logic, which turns the random numbers into, well, not so very random? After all, there are no spinning paddles, no shooting balls, just a man-made program run by a computer. 

C++, for example, offers the <cstdlib> library header and the rand() function. This library is meant to help us generate a random number, or what we might call a “pseudo random” number. Why pseudo? Because the “random” output generated using rand() is not truly random. If we use rand() to iterate while creating random numbers, then test the results statistically, we can see that after several iterations the generated numbers fail a statistical test, as some of the “random” results can easily be predicted.
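As a sketch of a better-behaved alternative, the standard <random> header provides engines such as the Mersenne Twister, which pass far more statistical tests than rand(). The helper below is our own illustration, not taken from any particular obfuscation tool:

```cpp
#include <random>

// Returns a uniformly distributed integer in [lo, hi] using the
// Mersenne Twister engine seeded from std::random_device, which is a
// non-deterministic source on most platforms. Unlike rand(), the
// distribution is well-defined and not biased by the modulo trick.
int random_in_range(int lo, int hi) {
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_int_distribution<int> dist(lo, hi);
    return dist(gen);
}
```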

An entrepreneur named Arvid Gerstmann developed his own random number generator that is more random, and we use his library as part of the final project in our book, when we develop a mini string obfuscation tool. 

Shuffle ‘em like a deck of cards

When we obfuscate, we shuffle various elements, such as strings, functions and so on, so that their order will be (almost) random, which makes it harder to analyze if someone is trying to crack your code. Think of shuffling data as taking a deck of cards and mixing them up in a random order. We do the same with the function we will be generating.

Shuffling changes the order of certain elements in a random (or nearly random) way, which makes it harder for an intruder to analyze and reverse engineer our code. One method of decoding obfuscated data is to examine what you expect to be the logical order of things; once that order is shuffled, it’s harder to guess what the obfuscated data is in the first place. Of course, the aim is not to alter the behavior of the code, but simply to work with a separate module which handles the shuffled elements as they should be handled once called.
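The shuffling step can be sketched in a few lines of standard C++ (the helper names are our own; a real obfuscator would emit the per-character assignments of a generated function in this shuffled order):

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <string>
#include <vector>

// Produces the indices 0..n-1 in a shuffled order, like mixing up a
// deck of n cards. The seed makes the shuffle reproducible for the
// code generator while still looking arbitrary to a reader.
std::vector<size_t> shuffled_indices(size_t n, unsigned seed) {
    std::vector<size_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);   // 0, 1, 2, ..., n-1
    std::mt19937 gen(seed);
    std::shuffle(idx.begin(), idx.end(), gen);
    return idx;
}

// Assigning characters in shuffled order still rebuilds the original
// string -- the behavior is unchanged, only the visible order differs.
std::string rebuild(const std::string& s, const std::vector<size_t>& order) {
    std::string out(s.size(), '\0');
    for (size_t i : order) out[i] = s[i];
    return out;
}
```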

Replacing Values with Formulas

Another method used in obfuscation is to randomly replace values with different types of formulas, such as x = z - y or x = y + z. Let’s say we have the value 72: we can replace this value with 100 - 28, or 61 + 11. When the formula is x = z - y, we need z to be random but larger than y. In other words, we insert this randomly generated formula into the generated source code instead of the original value. 

Figure 3 shows how obfuscated code will look when we insert random formulas.

Figure 3: Good obfuscation randomly replaces values with different types of formulas such as x = z - y or x = y + z.
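The substitution can be sketched as follows: given a value, we pick a random y and derive z = value + y, so the emitted expression z - y always evaluates back to the original value (names and ranges here are our own illustration):

```cpp
#include <random>
#include <string>

// What an obfuscator would emit in place of a literal value.
struct Formula {
    int z;
    int y;
    std::string code;   // e.g. "(100 - 28)" instead of 72
};

// Builds a random subtraction formula for the given value. Choosing
// z = value + y guarantees z - y == value and, for positive values,
// that z is larger than y.
Formula formula_for(int value, std::mt19937& gen) {
    std::uniform_int_distribution<int> dist(1, 1000);
    int y = dist(gen);
    int z = value + y;
    return { z, y, "(" + std::to_string(z) + " - " + std::to_string(y) + ")" };
}
```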

Adding Junk and ‘garbage’ data

Another method of concealing the content of our code, making it harder to parse and reverse engineer, is adding random junk data in between the real data. Let’s say for example that the result is a NULL terminated array – we place the NULL at the end of the string and the junk after the NULL. An obfuscated string using this method will look like this:

result[12] = L'$';
result[0] = L't';
result[5] = L'5';

Now, imagine that we assign the values of the real and junk characters in random order, so we may start with char [12] then [0], then [5], and so on, which makes it harder to understand the flow and result if examined.
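A minimal sketch of the layout described above: the real characters occupy the front of the buffer, the NULL terminator follows, and junk fills the rest, so code reading the buffer as a C string never sees the junk (the buffer size and the junk byte are arbitrary choices for illustration):

```cpp
#include <string>

// Builds a fixed-size buffer holding the secret, a terminating '\0',
// and junk padding beyond it. Requires secret.size() < buf_size.
std::string junk_padded(const std::string& secret, size_t buf_size) {
    std::string buf(buf_size, '#');            // '#' stands in for junk data
    for (size_t i = 0; i < secret.size(); ++i)
        buf[i] = secret[i];                    // real characters up front
    buf[secret.size()] = '\0';                 // terminator; junk stays after it
    return buf;
}
```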

Remember: obfuscated code is only as good as its weakest link. We always must test its resistance and try de-obfuscating it. The harder it gets, the stronger the obfuscation is.

Tip: Keep in mind that obfuscated source code is hard to maintain and update. Therefore, it is recommended to maintain the non-obfuscated version and obfuscate it before deploying a new version.

After having discussed a few general concepts behind code obfuscation, in the next section we will present a simple tool named Tiny Obfuscate, aimed at obfuscating strings, which works in two modes: an ad-hoc Immediate Mode and an entire source code Project Mode.

Tiny Obfuscate is a Windows application developed by Michael Haephrati using C++. It was initially introduced in a Code Project article as a small proof of concept that converts a given string into a handful of lines of code that generate it. 

Figure 4: The original Tiny Obfuscate interface

You enter the string and a variable name, and the lines of code are generated so they can be copied to the program and replace the original string. 

Figure 5: The advanced Tiny Obfuscate commercial version

A more advanced version of Tiny Obfuscate was actually used in real life as part of the development of several commercial products. This version has a “Project Mode” and an “Immediate Mode”. The Immediate Mode resembles the original version from the Code Project article, but has more features:

  • Users can select the type of string (UNICODE or wide char, const and more).
  • The obfuscated code is wrapped inside a new function which is generated.
  • Optionally, the function code and prototype are inserted into a given .cpp and .h file, after first checking whether there is already a function which obfuscates the given string.
  • The function call is copied to the Clipboard (either the newly generated function or an existing one, if the given string was obfuscated before), so the user can just paste it instead of the given string.
  • The generated function is automatically tested to verify that it will return the given string. 
  • Various control and escape characters are handled. These include \n, \t, etc., as well as format specifiers such as %s, %d, and so forth.
  • Comments are automatically added to keep track of the original string that was obfuscated and when it was obfuscated.


Let’s test and see how string obfuscation works with the following example. Say we have the following line:

wprintf(L"The result is %d", result);

Now, we wish to obfuscate the string, which in this case is The result is %d. We enter this string into the Immediate Mode “String to obfuscate” field:

and just press ENTER.

We will then see the following alert:

and the following code will appear (and will be inserted into the project’s source and header files).
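Since the screenshots are not reproduced here, the snippet below is a hand-written sketch of roughly the kind of function such a tool generates for L"The result is %d": per-character assignments out of order, a couple of values hidden behind small formulas, and junk beyond the terminator. It is an illustration of the technique, not Tiny Obfuscate’s actual output:

```cpp
#include <string>

// Illustrative generated function: returns L"The result is %d"
// without the string ever appearing as a literal.
std::wstring get_string_1()
{
    wchar_t result[20];
    result[19] = L'!';                             // junk beyond the terminator
    result[4]  = static_cast<wchar_t>(120 - 6);    // 'r' hidden behind a formula
    result[0]  = static_cast<wchar_t>(50 + 34);    // 'T'
    result[16] = L'\0';                            // terminator placed early on
    result[1]  = L'h';
    result[15] = L'd';
    result[14] = L'%';
    result[13] = L' ';
    result[2]  = L'e';
    result[3]  = L' ';
    result[5]  = L'e';
    result[6]  = L's';
    result[7]  = L'u';
    result[8]  = L'l';
    result[9]  = L't';
    result[10] = L' ';
    result[11] = L'i';
    result[12] = L's';
    return std::wstring(result);
}
```

The caller then replaces the literal with the function call, e.g. wprintf(get_string_1().c_str(), result);.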

The Project Mode allows selecting a Visual Studio solution or project, going over all source files, selecting what to obfuscate (variables, function names, numeric values, and strings), previewing the results, then checking the obfuscated project and interactively checking and unchecking each element to get the optimal result.

The advanced Tiny Obfuscate software generates and maintains a sqlite3 database which keeps track of everything done, allowing it to revert to the original version and undo any action. 


In this article, we have introduced the topic of code obfuscation, with an emphasis on string obfuscation. If you want to go deeper, in our book Learning C++ (ISBN 9781617298509, by Michael Haephrati and Ruth Haephrati, published by Manning Publications), we teach complete beginners the basics of the C++ programming language and gradually build their skills towards a final project: creating a useful, compact, yet powerful string obfuscation tool.