- Over the years, managers have fine-tuned their understanding of metrics such as “queries per second” or “request latency” to gauge the impact and expertise of backend engineers.
- Establishing universally accepted impact metrics for client-side engineers remains a challenge we have yet to overcome.
- The notion that client-side engineering is a less relevant discipline is losing ground, but an unconscious bias persists.
- A comprehensive grasp of impact metrics for client-side engineers helps remove personal bias from promotions and performance assessments.
- It’s crucial to discern what these metrics convey and what they omit.
When people managers assess the performance of software engineers, they often rely on a set of established metrics, believing they offer a meaningful representation of an engineer’s impact. However, these metrics sometimes fail to provide a complete and nuanced view of an engineer’s daily responsibilities and their actual contribution to a project.
Consider this scenario: an engineer makes a change to a critical component of a product used by millions. On paper, it appears they’ve impacted a substantial user base, but the reality may be completely different.
Indeed, while most performance assessment guides try to enforce metrics that can be directly tied to an individual, there is often a lack of clarity and understanding of what these metrics truly represent in the broader context of the engineer’s role and skills.
This deficiency is particularly pronounced in evaluating the impact of client-side engineers. The metrics used for their assessment are not as well understood as those used for their server-side peers, thus creating potential gaps in evaluation.
This article will delve deep into metrics that can be used for assessing the impact of client-side engineers, offering insights into what they mean and what they don’t.
Our aim is to provide a more comprehensive perspective that can be useful when developing performance assessment guides for organizations building full-stack software, ensuring a more balanced and fair evaluation of engineers’ contributions and impact.
What This Document Is, and What It’s Not
Most performance assessment guides available today pivot on a few foundational elements to bring structure to the assessment of engineers. These elements, while expressed differently across various organizations, hold a consistent essence.
- Firstly, engineers are generally assessed on the basis of their impact, or another term synonymous with impact. The evaluation begins with measuring the ripple effect of their work and contributions.
- Secondly, as practitioners of computer science, engineers are anticipated to untangle complex computer science issues to endow the business with a durable advantage. It’s a quiet understanding that problem-solving prowess is at the heart of their role.
- Thirdly, the silhouette of an engineer’s responsibilities morphs with varying levels of seniority. As they ascend the corporate ladder, their influence and leadership seamlessly integrate into the evaluation framework, becoming significant markers of their growth at senior echelons.
While most rubrics also include evaluations based on teamwork and other similar attributes, these are typically less contentious and more straightforward to calibrate across engineers working on diverse aspects of the stack. Hence, this document will not delve into those aspects, keeping the focus firmly on the aforementioned elements.
The following sections focus on a number of metrics we believe could be used to assess the performance of client-side engineers. With each metric, we highlight the associated engineering impact, discuss the inherent technical intricacy, and offer examples to demonstrate how contributions can be effectively contextualized using these parameters.
Adoption / Scale
Let’s address the elephant in the room first. A prevalent impact metric for gauging the body of work accomplished by client-side engineers often orbits around the adoption, engagement, or retention of the feature they developed.
Now, pause and ponder. Boasting product metrics like installs or DAU might not always reflect the engineers’ brilliance (or perhaps, sometimes it does?). It’s crucial here to fine-tune the calibration of assessment metrics across different teams, and to evaluate these numbers in tandem with the impact metrics used for backend engineers, which may, again, not always reflect individual expertise so much as the product’s growth.
But don’t be led astray. Yes, there exist substantial engineering challenges intertwined with the scale that these metrics showcase. Yet, it’s paramount to remember it’s the overcoming of these challenges that should be the yardstick for their performance assessment, not merely the growth or the flashy numbers themselves.
Product Metric: #[Daily|Monthly] Active Users; #Day [7|30] retention

Why is it important?
A whopping number of app installs or usage typically means a few things:
– It suggests a meticulously designed, universal implementation, particularly on the Web and Android platforms. Navigating the intricate maze of diverse lower API and browser versions on these platforms is indeed a commendable achievement in itself.
– It signals an ability to function effectively across a spectrum of geographic locations, each with its unique internet connectivity, privacy/legal mandates, and phone manufacturers.
– It implies software that copes with the fragmented landscape of Android hardware, as well as the multiple form factors on Apple’s platforms (macOS, tvOS, watchOS, etc.).
– It underscores the ability to iron out nuanced bugs on obscure devices and browser versions.
– It emphasizes the critical role in safeguarding the ecosystem’s health for apps that are truly ubiquitous (i.e., a billion installs) and potentially capable of causing system-wide catastrophe.
What it’s not:
– Contrary to popular belief, a billion installs doesn’t inherently measure a client-side engineer’s prowess in building burgeoning products.
– In a more light-hearted vein, it’s akin to building a backend API that serves (say) 500K queries per second. While it’s impressive, it’s not the lone marker of an engineer’s capability or the definitive gauge of the product’s overall vitality and growth trajectory.
– Without the trusty sidekicks we call quality metrics (outlined below), the #installs metric is a bit like a superhero without their cape. Sure, it’s flashy and might get you some street cred, but it’s hardly enough to truly save the day. Alone, it mostly just flaunts product growth and lacks the depth to genuinely showcase impact. So, let’s not send it into battle without its full armor, shall we?
Examples:
– Mitra enhanced our text editor to function on niche Android OEMs, expanding our user base by 1% for our 100M DAU product.
– Akira’s commitment to web standards streamlined our transition from a 2K-user private preview to a 1M-user public preview across browsers.
– Mireya’s design of our core mobile functionality in C++ allowed us to launch the iOS app just two months after the Android one, resulting in an additional 1M DAU.
– Ila’s deep knowledge of Apple platform APIs enabled us to roll out an app version for Apple Silicon to 100K users within two weeks of its WWDC announcement.
– We boosted our CSAT score in India by 3% thanks to Laya addressing specific bugs impacting thousands of Android users on devices without emulation capabilities.
– Amal’s optimization of local storage was key to having power users be highly engaged in the product for 30 consecutive days without running out of disk space.
App Health and Stability
Among all metrics that highlight the challenges of building client-side applications compared to backends or APIs, app health and stability stand out the most. Rolling back client-side applications is inherently difficult, and they often require a slower release cadence, especially outside of web environments. This sets a high standard for their quality, stability, and overall health. Additionally, the performance of client apps can subtly influence API backends. Factors like caching, retrying, and recovery in client apps can directly correlate with essential metrics in application backends.
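The caching-and-retry point above can be made concrete. Below is a minimal, hypothetical sketch (names and parameters are illustrative, not from any specific codebase) of a client-side retry helper with capped exponential backoff and full jitter — the kind of policy whose presence or absence shows up directly in backend QPS and error graphs during incidents.

```typescript
// Hypothetical sketch: retry a flaky network call with exponential backoff
// and full jitter. Naive client retry loops can multiply backend load during
// an outage; capped, jittered backoff smooths the retry traffic that backend
// dashboards ultimately see.
async function fetchWithBackoff<T>(
  attempt: () => Promise<T>,
  maxRetries = 4,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i <= maxRetries; i++) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err;
      if (i === maxRetries) break;
      // Exponential backoff capped at 8s, with full jitter so that many
      // clients failing at once do not retry in lockstep.
      const cap = Math.min(baseDelayMs * 2 ** i, 8000);
      const delay = Math.random() * cap;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Whether a team's client does something like this (or hammers the backend in a tight loop) is exactly the kind of engineering judgment these metrics are meant to surface.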
Metric: % crash-free users / #crashes

What it is:
A reduction in crashes (or greater than 99% crash-free users) represents:
– The demonstration of foresight and rigorous engineering to deliver top-notch client-side software, which is inherently more difficult to roll back than backend software.
– An adherence to the evolving best practices on the Web, as well as on Apple and Android platforms.
– The capability to construct software that operates seamlessly in diverse computing environments, especially for Android. This highlights the team’s technical versatility and in-depth understanding of various computing landscapes, ensuring optimal functionality and user experience across all platforms.
What it’s not:
An application may also be crash-free because it is intrinsically simple. These metrics need to be calibrated against the number of flows and features that the product supports.
Examples:
– Maya reduced the crash rate in a sign-in flow supporting 4 different identity providers from 10K crashes a day to under 1K.
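For clarity, the crash-free-users figure referenced here is usually computed as the share of active users who experienced no crash in the period. A minimal sketch (illustrative numbers, not from the article):

```typescript
// Percentage of users who experienced zero crashes in a reporting window.
function crashFreeUsersPct(totalUsers: number, usersWithCrash: number): number {
  if (totalUsers === 0) return 100; // avoid division by zero
  return ((totalUsers - usersWithCrash) / totalUsers) * 100;
}

// e.g. 100,000 daily users, 800 of whom hit at least one crash:
// roughly 99.2% crash-free, just above the 99% bar mentioned above.
```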
Metric: App start-up time (P95 cold start)

What it is:
Most production-grade applications have a complex start-up path that involves initializing not just the required business logic, but also a complex set of dependencies. A snappy start-up experience is critical for retaining users. Any substantial improvement to the P95 of cold start demonstrates meticulous profiling, hypothesis building, and rigorous experimentation.

Examples:
– Alice reduced the P95 of iOS cold start by 25% in the past 6 months. She did this by carefully profiling the start-up path and deferring the initialization of libraries that were not needed for the initial screen to load.

Metric: Memory footprint

What it is:
An application with a reasonable memory footprint indicates the adoption of engineering best practices such as loading entities efficiently and reusing objects when appropriate. This is especially critical on Android, where lower-end devices are common.

Examples:
– Yu rewrote the view model cache. This resulted in better memory utilization and reduced OOM crashes in the past 6 months by 15%.

Metric: Build times

What it is:
A developer pain point in native mobile development, compared to web engineering, is the relatively long compilation times. Engineering effort that makes a significant dent in compilation times has a multiplicative effect on developer productivity. Among other strategies, this can be done by caching build artifacts, auditing dependencies to eliminate irrelevant ones, and using dynamic linking of libraries.
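The deferred-initialization technique behind the cold-start example above can be sketched in a few lines. This is a hypothetical illustration (the class and task names are assumed, not from the article): only work the first screen needs runs eagerly, and everything else is queued until the first frame has rendered.

```typescript
// Hypothetical sketch of deferring non-critical start-up work.
type Initializer = () => void;

class StartupScheduler {
  private deferred: Initializer[] = [];

  // Run immediately: required before the first screen can render.
  runCritical(task: Initializer): void {
    task();
  }

  // Queue for later: analytics, prefetchers, low-priority caches.
  defer(task: Initializer): void {
    this.deferred.push(task);
  }

  // Invoked once the first screen is visible; drains the deferred queue.
  onFirstFrameRendered(): void {
    for (const task of this.deferred) task();
    this.deferred = [];
  }
}
```

The design choice being rewarded here is the explicit split between the critical path and everything else — the P95 cold-start number measures only the former.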
Metric: #Hotfixes

What it is:
A modest number of hotfixes typically signals:
– Stable, high-caliber releases that stand robust against the tides of user demands and technical challenges, showcasing a commitment to delivering exceptional and reliable software solutions.
– Thoroughly conceived experiment flags, highlighting a strategic and considered approach to feature testing and implementation, further strengthening the software’s resilience and user-centric design.
Examples:
– Kriti designed an experimentation framework on the client that allowed us to remotely configure client parameters on the backend without re-releasing apps, while not violating App Store and Play Store policies. We’ve gone from an average of 6 hotfixes per quarter to under 2.
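The core idea in an example like Kriti's — shipping safe defaults and overlaying remotely fetched parameters so behavior can change without a hotfix — can be sketched minimally. The config shape and field names below are hypothetical, purely for illustration:

```typescript
// Hypothetical sketch of remotely configurable client parameters.
interface ClientConfig {
  searchDebounceMs: number;
  newComposerEnabled: boolean;
}

// Compiled-in defaults: the app must behave sensibly with these alone.
const DEFAULTS: ClientConfig = {
  searchDebounceMs: 300,
  newComposerEnabled: false,
};

function resolveConfig(remote: Partial<ClientConfig> | null): ClientConfig {
  // A failed or partial fetch must never break the app: fall back to the
  // compiled-in defaults for any missing key.
  return { ...DEFAULTS, ...(remote ?? {}) };
}
```

The subtle engineering work is in the failure modes — stale fetches, partial payloads, offline launches — all of which degrade to defaults rather than to a broken app needing a hotfix.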
Metric: #Errors on backend (4XX) (and other backend metrics)

What it is:
Where relevant, a decrease in backend error rates could suggest:
– Corrections in client implementations, reducing faulty RPCs. This indicates improved system communication and coordination.
– Improved client configurability, allowing for parameter adjustments post-launch, ensuring better performance and responsiveness to project needs and issues.
Examples:
– Indra went through a painstaking process to understand which scenarios caused clients to send incorrect parameters and spike 400s on our backends. This has reduced false alarms in our on-call rotations by over 30%.
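One common shape for the kind of fix described in Indra's example is validating request parameters on the client before the RPC is sent, so malformed requests are rejected locally instead of spiking backend 4XX counts. A minimal, hypothetical sketch (the request type and limits are assumptions for illustration):

```typescript
// Hypothetical sketch: client-side validation of an RPC payload.
interface SearchRequest {
  query: string;
  pageSize: number;
}

// Returns a list of validation errors; an empty list means the request is
// safe to send. Mirroring the server's constraints on the client prevents
// known-bad requests from ever reaching the backend.
function validateSearchRequest(req: SearchRequest): string[] {
  const errors: string[] = [];
  if (req.query.trim().length === 0) {
    errors.push("query must be non-empty");
  }
  if (!Number.isInteger(req.pageSize) || req.pageSize < 1 || req.pageSize > 100) {
    errors.push("pageSize must be an integer in [1, 100]");
  }
  return errors;
}
```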
Client applications are the main touchpoint for users across a majority of online apps. While this section covering product excellence may appear to be a “catch-all”, the aim is to emphasize the direct connection between quick releases, accessibility, prompt bug resolution, and overall customer satisfaction.
Customer Focus / Product

Metric: #Issues resolved

What it is:
While there is potential for this to be seen as a vanity metric, it typically carries substantial significance for open-source client apps confronted with issues reported by users. The metric underscores the importance of addressing and resolving these issues to maintain and enhance the project’s reliability and reputation in the open-source community.
What it’s not:
– For teams used to declaring bug bankruptcy, this metric loses its efficacy. It falls short as a reliable measure of performance and improvement when bugs are regularly written off en masse without resolution.
– Additionally, the metric operates on an assumption of good faith.
Metric: Release frequency

What it is:
Frequent releases can imply:
– Consistent bug fixes and improvements, showing a commitment to refining the product and ensuring its reliability and effectiveness.
– Enhanced releasability, overcoming historical challenges and demonstrating an improved and more efficient release process.
– Diligent efforts to stay in sync with the rest of the ecosystem, including dependencies and platform updates, ensuring the product remains up-to-date, secure, and compatible with other elements of the ecosystem.
What it’s not:
– A significantly high number of releases can also stem from instability, indicating that frequent updates are needed to address ongoing issues and ensure the product works as expected.
– This perspective underscores the importance of balancing release frequency with product stability to avoid overloading the team and the end-users.
Metric: CSAT (customer satisfaction)

What it is:
Customer satisfaction surveys, albeit generic, are often profoundly influenced by client applications. Apps stand as the first, most tangible, and most recurrent interaction customers have with a service.
– A subpar experience can indelibly etch a negative impression, proving hard to erase.
– On the flip side, a stellar app experience can compensate for deficiencies in service features, performance, and pricing, leaving a positive and lasting impact on the customers, fostering loyalty and satisfaction.
Metric: Accessibility / Usability

What it is:
– Efforts to build an accessible, inclusive product that can be used by people with disabilities in hearing, vision, mobility, or speech.
– Usability improvements are often reflected as a reduction in support costs for the product.
For a while, client-side engineers were perceived as not being as hardcore as their backend counterparts. Backend engineers often avoided client-side opportunities, as client-side engineering was seen as a lesser, easier form of software engineering that prioritized vanity over correctness and software quality. While this view has shifted significantly over the last five years with the rise of serverless applications and SaaS backends, remnants of it persist.
Even as these perspectives are on the path to correction, it’s crucial for us as people managers to ensure that our personal biases do not impact our decision-making, especially when our decisions profoundly affect the careers and well-being of client-side engineers. The metrics discussed here aim to offer a foundational point for ensuring an organization that is more equitable to client-side engineers.