The Use of Organizational Assessment Reviews to Improve Organizational Performance

Summary from the October 20, 2017 EOM Panel Meeting

Background

The Office of Management and Budget has invited NAPA to share its insights on how to leverage the growing body of data on the organizational health and performance of federal agencies and programs, and how these data might be used to improve health and performance at the unit level.  The EOM panel convened public and private sector leaders to share their experiences.

Questions explored include: Are there existing models or standards for assessing unit-level organizational health and performance?  How do other organizations conduct such assessments?  If they find a need for improvement, what approaches are used to foster change?

Objective

Understand how different organizations assess and improve their organizational health and performance, ideally at the front-line unit level:

  • What is the objective of the organizational assessment?
  • What criteria are used when conducting an assessment?
  • What data are available or collected to make the assessment against the criteria (quantitative vs. qualitative)?
  • Who analyzes the data, and how?
  • What approach is used to undertake an assessment (e.g., statistical review, in-person site visit)?
  • Who follows up on results with the unit after an assessment, and how?

Presentations

Veterans Employment and Training Service (DOL) Efforts to Improve Employee Engagement.

VETS has a total staff of about 250 and a $275 million budget.  It posts counselors in DOL-funded job centers around the country and has offices in each state.  Most state offices have only one or two staff; regional offices have 6-7.

In 2012, the organization was in disarray, its political leadership was replaced, and its scores on the governmentwide Federal Employee Viewpoint Survey (FEVS) were low.  About 90 percent of staff are veterans who are deeply committed to their jobs, but they felt unsupported.  In 2013, the Secretary set a target of a 5 percent improvement in FEVS scores across the department in its annual operating plan.  There were a number of departmental initiatives, and there was strong secretarial support.  The departmental approach encompassed three elements: better performance measurement, improvement in FEVS engagement scores, and links to a “learning agenda.”

The new leadership in VETS used an integrated leadership approach to better engage employees, involving:

  • Better communication with staff via quarterly all-hands video links; the goal was simply to start the conversation.
  • Targeted training investments, and greater transparency about who is being trained for what, based on employees’ individual development plans.
  • A review of the agency’s mission, vision, and values.

In 2014, VETS’s service delivery model was changed, which required changes in internal processes:

  • Delegated increased authority to the regions to run themselves (in order to mitigate the complaint that you can’t build trust from afar), with headquarters oversight.
  • Brought in regional staff as detailees, who helped develop tailored regional plans.  This resulted in data more useful to the different offices and regional administrators; combining FEVS results with other data sources allowed regions to tailor their approaches.

By 2017, VETS had increased its employee engagement scores on the FEVS by 30 percent, ranking fifth among all units in the Department of Labor and coming close to the top quartile governmentwide.

Government of India’s Use of Results-Oriented Frameworks and Performance Agreements.

The government of India created a performance management framework in the mid-2000s that employed a series of “Results Framework Documents” (RFDs), which were performance agreements between the leaders of the 80 largest agencies and the Cabinet/Prime Minister.  The RFDs were the basis of the performance contracts between each agency’s minister (a political appointee) and its top career Secretary.

The focus was largely on improving systems within the agencies (judged to account for about 80 percent of the drivers of performance).  The elements of each RFD were converted into an index so agencies could be compared; a sketch of such an index calculation follows the list below.  The RFDs specified:

  • Main objectives to be achieved
  • Actions to be taken to meet the objectives
  • Metrics that would demonstrate the extent of progress being made.
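
To make the index idea concrete, the following is a minimal sketch of how weighted objectives and metrics might be rolled up into one comparable score.  The weights, targets, and achieved values are invented for illustration; this is not the actual RFD scoring methodology.

    # Minimal sketch of a weighted composite index in the spirit of an RFD.
    # Each metric: (weight, target, achieved, lower_is_better) -- all values
    # below are invented for illustration.
    metrics = {
        "processing_time_days":  (0.4, 30, 35, True),
        "citizens_served_k":     (0.3, 500, 520, False),
        "offices_digitized_pct": (0.3, 80, 60, False),
    }

    def achievement(target, achieved, lower_is_better):
        """Score achievement against target as a ratio, capped at 100 percent."""
        ratio = target / achieved if lower_is_better else achieved / target
        return min(ratio, 1.0)

    # One number per agency, so agencies can be ranked and compared.
    index = sum(w * achievement(t, a, lib) for w, t, a, lib in metrics.values())
    print(f"Composite index: {index:.1%}")   # -> Composite index: 86.8%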

Third-party evaluators were used to assess the quality of the performance agreements (e.g., whether they set stretch goals or easy-to-achieve goals) and to assess the scoring of the progress reports at the end of the process.

The new framework approach resulted in some specific changes:

  • Agencies began to prioritize among many competing objectives
  • Agencies moved from single-point targets to graduated assessments of progress
  • Benchmark competitions emerged between agencies
  • The approach incentivized performance; executives’ performance bonuses were tied to departmental scores

U.S. Army’s Use of Operational Assessment Reviews

The Army uses a variety of assessment techniques to manage performance and the attainment of objectives.  “Effects-based” assessments help evaluate the Army’s attainment of strategic objectives; these typically focus on a 5-10 year time period.  The Army uses both quantitative performance metrics and qualitative surveys (e.g., command climate surveys) to assess organizational effectiveness.  It assesses unit- or command-level processes:

  • The sequence is:  Plan, define, operate, monitor, assess.
  • Definitions of what constitutes “success” are determined up front in a cooperative effort between the planners, analysts, and subject matter experts responsible for evaluating each metric.  These include both “readiness” and “strategic accomplishments” metrics.
  • The assessment teams conduct organizational command climate surveys, along with other tailored surveys.
  • A major project, such as a strategic campaign plan, would take 6 months to develop a framework, followed by 3 months of data collection.  In the case of the Pacific Command, a tailored organizational assessment led to a flatter command structure.
  • The Army is leading many efforts to collect and evaluate big data effectively.  One of these is the Human Capital Big Data initiative, under which the Army Analytics Group is creating a “data lake” of all Army military personnel data.  Data in this “data lake” can be analyzed to assess trends in recruitment and retention and to determine the effectiveness of Army incentive programs (see the sketch after this list).
  • Army ORSAs (Operations Research and Systems Analysts) lead a majority of the assessment efforts in the Army. There are about 600 active duty ORSAs at present, and an equal number of civilian ORSAs, working to solve problems for the Army at large.
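
As an illustration of the kind of question such a “data lake” could answer, here is a minimal sketch that compares retention rates for participants and non-participants in an incentive program.  The column names and records are hypothetical; they do not reflect the initiative’s actual schema or data.

    import pandas as pd

    # Hypothetical personnel extract -- column names and values are invented,
    # not the Human Capital Big Data initiative's actual schema.
    records = pd.DataFrame({
        "fiscal_year": [2015, 2015, 2015, 2015, 2016, 2016, 2016, 2016],
        "incentive":   [True, True, False, False, True, False, True, False],
        "retained":    [True, False, True, False, True, True, True, False],
    })

    # Retention rate by cohort year and incentive-program participation:
    # the kind of trend comparison described above.
    rates = (records
             .groupby(["fiscal_year", "incentive"])["retained"]
             .mean()
             .unstack("incentive"))
    print(rates)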

Private Sector Examples Shared by Gartner.

The following are three private sector examples in which a data initiative was undertaken in response to specific business needs identified by the company’s top leadership.

  • “Hadrian” is a large telecommunications company with 30,000 employees.  It undertook a cybersecurity initiative that empowered employees to be part of the security process by identifying breaches.  The approach employed a rating, analogous to a credit rating, for security and access to IT systems.
  • Intel Corporation had a large, federated collection of data.  It had difficulty making apples-to-apples comparisons until it developed a dictionary of standard definitions for data seen as having “enterprise-wide value” and required all sub-units to use those definitions [this may resemble the effort to respond to the DATA Act’s requirements].  Intel identified 18 key drivers of performance, in three buckets, and then ensured that all managers understood these metrics so they could use them when making decisions.
  • First Group (a transportation company) focused on seven categories of business capabilities it felt it needed to manage in order to grow.  It focused attention on creating “data health,” knowing its data were not useful for decision-making: data collection processes were poor, integration of data from different sources was weak, and usability and timeliness were low.
    • Data needed to align to key business capabilities (e.g., maintenance of buses) and the sub-capabilities of those capabilities (e.g., the availability of spare parts in garages).  The overall KPI was to increase the percent of buses that were operational.
    • First Group surveyed frontline employees on the “health” of the data they generate (e.g., the process for entering data was found to be time-consuming, so it was often skipped; in response, management made data entry much easier).  Data health rose.  A toy sketch of both measures follows this list.
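
Here is a minimal sketch of the two measures just described: the bus-availability KPI and a survey-based “data health” score.  All numbers, field names, and the 1-5 rating scale are invented for illustration.

    # Toy illustration -- all figures and field names are invented.

    # Overall KPI: percent of the fleet that is operational.
    fleet_size = 1200
    buses_in_service = 1044
    print(f"Operational buses: {buses_in_service / fleet_size:.1%}")

    # "Data health" from a frontline survey: each record rates one data
    # source on completeness, timeliness, and usability (1-5 scale).
    survey = [
        {"source": "garage_inventory", "completeness": 4, "timeliness": 2, "usability": 3},
        {"source": "maintenance_log",  "completeness": 5, "timeliness": 4, "usability": 4},
        {"source": "parts_orders",     "completeness": 3, "timeliness": 3, "usability": 2},
    ]
    for row in survey:
        scores = (row["completeness"], row["timeliness"], row["usability"])
        health = sum(scores) / (len(scores) * 5)   # normalize to 0-1
        print(f"{row['source']}: data health {health:.0%}")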

Discussion

  • There is a need to demonstrate a link between employee engagement and improved performance, or this initiative will be seen only as an “employee happiness” effort.
  • Commerce and Treasury have analytic assessment units, similar to the Army.
  • Can organizational assessment reviews identify opportunities for innovation?
  • A leading indicator for organizational health, available from the FEVS, is the state of “leadership ethics.” If employees perceive poor ethical behavior, this often foreshadows a decline in overall organizational health and performance, sometimes leading to visible scandal.
  • A “new frontier” is assessing whether the “fix” is a change in organizational culture vs. a change in the organization’s leaders.
  • What is the impact of the blended workforce on the workplace?
