
A Googler’s Critique of Google’s Performance Management Reviews


By I Done This Support


This post was written anonymously by a current Google employee and former Microsoft employee. It details the author’s perspective on her first-hand experience with Google’s performance management review system.

“Confidence… thrives on honesty, on honor, on the sacredness of obligations, on faithful protection and on unselfish performance. Without them it cannot live.”

–Franklin D. Roosevelt

Institutions are built on the trust and credibility of their members. This maxim holds true for employees and their employers just the same as it does for citizens and their government. Whereas the electoral process in modern democracies allows you and me to rate our government’s performance, performance rating systems make employees the subject of evaluation. In both cases, however, faith in the integrity of the process is the only thing that ensures order.

Managing a performance rating system that motivates, rewards, and retains talented employees across an organization tens of thousands of employees strong is a grueling, never-ending challenge. How does an organization balance the values core to its DNA and its continued success (merit, openness, innovation, and loyalty) while maintaining perceptions of fairness?

I’ve worked at both Microsoft and Google and seen both tech giants fight this battle with complex formulae, peer awards, and strict curves.

Numerical ratings were originally born out of a desire for precision. Performance buckets were born out of an inability to defend the precise scores. In November 2013, Microsoft eliminated its forced-curve rating system, and in April 2014, Google followed suit.

All four performance rating schemes (the old and new systems at both companies) follow a similar cadence: employees are given a rating relative to their peers on a quarterly basis. This is done in secret and potentially never shared with employees. On a semi-annual basis, summary assessments are shared with a selective set of examples (of work and behavior) that articulate and reinforce the rating. Then employees are made aware of the bonuses, salary raises, and stock grants they will be awarded. The rewards are decided unilaterally, regardless of the dialogue that takes place during the review, and the next chance to check in and reassess is six months away.

First-Hand Observations

As someone who has lived through cycles of the ever-evolving performance evaluation and rating mechanisms at these tech giants, a few observations emerge:

Forced curves undermine the spirit of collaboration and foster a mindset of hoarding pie instead of expanding it

Certain specialized organizations benefit from having a defined numerical goal. For example, a quarterly sales quota is a very clear measuring stick, as are portfolio returns, bugs resolved, or customers satisfied. But absent specific, comparable measures of productive output, large firms face the uphill battle of linking performance to rewards.

When you force fit a curve to the array of employee responsibilities, which vary in scope and complexity, it becomes virtually impossible for one lowly employee to pinpoint what distinguishes “good” from “poor” or “great”.

I’ve found myself asking, “Did I score well because I put in the hours or because I got an easy draw?” Or, “Is managing a profitable line of business more merit-worthy than building a floor for a failing business?”

In my experience, people managers suffer through this ambiguity just the same. Despite the wealth of data they have about their direct reports, they’re unable to articulate the rationale (or broader context within the cohort) underlying the numerical scores they assign. And in the absence of transparency or an understanding of how individual contributions compare to team success, self-preservation reigns supreme.

And even with the recent moves away from strict numerical curves, there remains a finite pool of rewards to be distributed, which undercuts the collaborative mentality these companies are trying to foster.

Celebrating performance through evaluation cycles (quarterly, semiannually, annually) creates a sense that everyday work does not matter

The climb toward credible ratings grows steeper when an annual or semiannual review divorces an accomplishment from its recognition. The emotional impact of a successful presentation or a new policy is nowhere to be found in a set of six-month-old notes. Worse still, changes to compensation or to the rating system made in response to months-old polling data address past concerns (and possibly the concerns of past employees).

Even data-rich, data-loving companies shy away from being transparent about how they arrive at individual ratings, which produces a perception of arbitrary assessment and a false notion of precision

How do employees adapt and improve if they aren’t working at the trading desk or privy to examples of exceptional performance? They turn to Glassdoor, HR brochures, or worst of all, personal anecdotes to bolster their own assessment of whether they are receiving a “fair” deal. Unfortunately, not one of these third-party sources has the nuanced understanding of an employee or his/her team necessary to provide context. What’s often left is a broken relationship devoid of trust.

Performance rating systems are reactive and intended to buoy the ship against alarming trends in survey data and rates of attrition; improvements and tweaks are subject to lengthy implementation cycles

How to Do Better

So what can these firms do to win the war for credibility? Be transparent. Throw open the doors and share the notes. Make measurement and compensation public. Have peers drive the rating process. The power of transparency is well understood. There are already measures in place to build engagement among employees and alignment within teams:

• Empowering employees to reward one another

• Having everyone share in company profits (e.g. stock awards or profit sharing)

• Creating awards for exceptional team performance (e.g. working across divisions or elevating the division through combined efforts)

• Pooling risk vertically (e.g. tying manager performance to team performance)

Increased context and knowledge build comfort and trust for employees and managers alike. When employees know how they’re measured, there’s less room for suspicion. And when they can connect the dots between individual performance and team success, there’s greater job satisfaction.

Ultimately, the goal of a performance rating system is to reward and retain capable employees by keeping them happy and feeling like they have a fair deal.

Transparency goes a long way toward lending credibility to the process and building commitment to the company, but it isn’t a silver bullet. Giving employees greater flexibility in what they take on and the efforts they lead also builds a sense of ownership and commitment. Opportunities such as 20% projects (wherein employees spend 20% of their time working on something about which they’re passionate) or cross-organizational initiatives (e.g. building a volunteering program) are excellent examples of empowering employees through choice. But there’s room for this notion of self-direction to go even further: completely open allocation (e.g. 100% self-directed time) or letting employees choose their manager are two programs I would certainly sign up for.

What it boils down to is that employees want to know how they are being evaluated and want to know that they’re making conscious choices. Because while you vote with a punch card at the election booth, in the workplace you vote with your feet.

