Thursday, February 14, 2013

Speaking Up to Hierarchies

Getting decisions made with enough sources of data is important for all of us. I have just come across another example within the medical field, where the impact is perhaps more acute, since it can end up being a life-or-death issue, or at least a quality-of-life issue, for many people.

While many of us think of doctors as highly intelligent and highly trained -- which they are -- we usually assume that they will be in positions to exert incredible leverage in decisions on patient care -- which they do. However, in important clinical settings, groups of doctors are involved in decisions about care, and in these situations issues of status and hierarchy play out as they do in many organizational contexts. Questioning the "assumed standard" of a brilliant researcher or a senior surgeon becomes very difficult, according to two recent posts, here and here.

image: Dartmouth Medicine, 2004
Here is an excerpt from the first post, by Dr. Pauline Chen in a NY Times medical blog:
Even as some clinicians attempt to compensate by organizing multidisciplinary meetings, inviting doctors from all specialties to discuss a patient’s therapeutic options, “there will inevitably be a hierarchy at those meetings of who is speaking,” Dr. Srivastava noted. “And it won’t always be the ones who know the most about the patient who will be taking the lead.”

It is the potentially disastrous repercussions for patients that make this overly developed awareness of rank and boundaries a critical issue in medicine. Recent efforts to raise safety standards and improve patient care have shown that teams are a critical ingredient for success. But simply organizing multidisciplinary lineups of clinicians isn’t enough. What is required are teams that recognize the importance of all voices and encourage active and open debate. (my emphasis)
Since their patient’s death, Dr. Srivastava and the surgeon have worked together to discuss patient cases, articulate questions and describe their own uncertainties to each other and in patients’ notes. “We have tried to remain cognizant of the fact that we are susceptible to thinking about hierarchy,” Dr. Srivastava said. “We have tried to remember that sometimes, despite our best intentions, we do not speak up for our patients because we are fearful of the consequences.”
As Dr. Chen notes, medical teams need to build in ways to encourage active input of all points of view, in spite of fears about the consequences for rank and status. Sound familiar?

Here are some methods to encourage this:
  • setting team agreements on debate and multiple points of view
  • modeling by senior staff
  • active after-action review of how points of view were gathered and heard, and the effects on care
  • checklists for potential contraindications.
What does your organization do to make sure questions are considered before critical decisions?

Tuesday, December 4, 2012

How to get more outcomes out of meetings

I am thinking about a recent report I put together for a consultant client; he asked his client organization a set of questions about their executive team -- how well they worked at making good decisions, whether they based their disagreements on actual data, whether they were willing to challenge each other directly, whether they were able to come together after a decision and fully support the agreed direction. We compared their ratings on such questions across three areas: a) each executive rating him or herself, b) executives rating their peer team, and c) directors -- their direct reports -- rating the executive team.

The results showed that ratings in many crucial areas of challenge and confrontation were decidedly low, even though each individual rated him/herself fairly high. That is, both the directors and the executives agreed that the exec team as a whole lacked sufficient discussion of disagreements leading to a robust decision, and that the aftermath (poor outcomes) was not reviewed for organizational learning.
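As an illustration of the kind of comparison we ran, here is a minimal sketch in Python. The question wording, scores, and field names are invented for the example; the real survey had more questions and raters.

```python
# Compare the three rating perspectives described above:
# each executive on him/herself, peers on the team, and direct reports on the team.
from statistics import mean

# Hypothetical survey data: question -> scores (1-10) from each rater group
surveys = {
    "challenge each other directly": {
        "self":    [8, 9, 8, 9],
        "peers":   [5, 6, 5, 6],
        "reports": [4, 5, 5, 4],
    },
    "fully support decisions once made": {
        "self":    [9, 8, 9, 8],
        "peers":   [6, 7, 6, 7],
        "reports": [5, 6, 5, 6],
    },
}

def rating_gap(groups):
    """Average self-rating, average rating by everyone else, and the gap."""
    self_avg = mean(groups["self"])
    others_avg = mean(groups["peers"] + groups["reports"])
    return self_avg, others_avg, self_avg - others_avg

for question, groups in surveys.items():
    s, o, gap = rating_gap(groups)
    print(f"{question}: self {s:.1f}, others {o:.1f}, gap {gap:+.1f}")
```

A large positive gap on a question is exactly the pattern described above: individuals rating themselves high while both peers and direct reports rate the team low.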

I think we might find similar results in many executive teams that are under pressure and have not yet had time to become fully formed by trial by fire. The question, of course, is what to do in this situation. There are many reasons why this kind of stasis exists, and those reasons make it not so easy to simply 'make a change.'

My thinking runs to what mechanisms could then support a course correction in how the team works together.... here is some thinking from a couple of researchers writing on the HBR Blog (co-authors of a recent book on Heart, Smarts, Guts and Luck.) The short article is on the Courage to Be Direct.

So how could we develop more ways to facilitate direct input, especially when making a decision that counts? Well, if people have trouble saying things simply and directly in a meeting, we can always gather opinions online, in response to a couple of careful, direct questions. Then we see all the opinions in one place, individuals are not put on the spot, and we can look at the differences together.

We can ask a series of confidence-check questions after we have supposedly made a decision: for each element of the decision, on a scale of 1 to 10, how confident are you that this plank in the raft will effectively help us reach our agreed destination? If you rated an item less than 8, what specifically would need to change to increase your confidence? In my experience, this will bring out the places where further discussion and data gathering should take place before acting.
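The tally itself is simple enough to sketch. Here is one possible Python version of that confidence check, with the plank names and scores made up for illustration; the 8 threshold is the one mentioned above.

```python
# Sketch of a confidence-check tally: each decision element ("plank")
# collects 1-10 confidence ratings from the team.
from statistics import mean

THRESHOLD = 8  # any rating below this triggers further discussion

def flag_low_confidence(ratings):
    """ratings: dict mapping plank name -> list of 1-10 scores.
    Returns (plank, average, lowest) for planks needing more
    discussion, lowest average first."""
    flagged = [
        (plank, mean(scores), min(scores))
        for plank, scores in ratings.items()
        if min(scores) < THRESHOLD
    ]
    return sorted(flagged, key=lambda t: t[1])

# Hypothetical decision elements and team ratings
ratings = {
    "hire two engineers":  [9, 8, 9, 10],
    "launch in Q3":        [8, 6, 9, 7],
    "drop legacy product": [5, 7, 8, 6],
}

for plank, avg, low in flag_low_confidence(ratings):
    print(f"{plank}: avg {avg:.1f}, lowest {low} -- revisit before acting")
```

The point is not the arithmetic but the ordering: the planks that surface at the top of the flagged list are where the discussion and data gathering should happen first.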

Instituting periodic after-review of decisions can be another useful mechanism... again, this can be structured with online input, so that the data already exists together when the group meets. Were we successful? Was our decision correct? What elements did we miss? Did we fully support the decision with resources from multiple areas? Looking back, what should we change now?

Keeping an archive of such reviews and confidence checks allows us to determine patterns over time in our process and our learning.
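One pattern worth checking in such an archive is whether pre-decision confidence actually predicted outcomes. A hypothetical sketch (the records and field names are invented for illustration):

```python
# Mine an archive of past confidence checks and after-reviews for a pattern:
# did decisions we were confident about fare better?
from collections import defaultdict
from statistics import mean

# Hypothetical archived records: one per reviewed decision
archive = [
    {"quarter": "2012-Q1", "decision": "pricing change",   "avg_confidence": 6.2, "succeeded": False},
    {"quarter": "2012-Q2", "decision": "new vertical",     "avg_confidence": 8.4, "succeeded": True},
    {"quarter": "2012-Q3", "decision": "platform rewrite", "avg_confidence": 5.9, "succeeded": False},
    {"quarter": "2012-Q4", "decision": "sales hire plan",  "avg_confidence": 8.8, "succeeded": True},
]

# Group pre-decision confidence scores by the eventual outcome
by_outcome = defaultdict(list)
for rec in archive:
    by_outcome[rec["succeeded"]].append(rec["avg_confidence"])

for outcome in sorted(by_outcome):
    label = "succeeded" if outcome else "missed"
    print(f"{label}: mean pre-decision confidence {mean(by_outcome[outcome]):.2f}")
```

If the two means separate cleanly, the confidence check is carrying real signal, and low scores deserve to be treated as an early warning rather than noise.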

Wednesday, August 31, 2011

Asking Good Questions

I just read an interesting post in HBR: The Art of Asking Questions, by Ron Ashkenas. He is talking about the need as a manager to inquire into your team's thinking without stopping their momentum. He also suggests 3 types of questions to ask:    
  • about yourself
  • about plans
  • about the organization
These issues raise the question: how do you frame questions in order to build your group's intelligence? They also suggest another possibility: could there be a mechanism for regularly asking the group questions, in order to gather important knowledge about how we are doing? In other words, you don't have to ask questions only at the decision point, or when you have concerns about direction (although Ashkenas' point is to find out first rather than direct.)

It is worth building in a method for finding out how the emperor's clothes are doing; it is worth knowing how the project and the organization are perceived by different stakeholder groups. And how those questions are framed can make the difference between interaction and actionable knowledge on the one hand, and tepid opinion and further apathy on the other.

Using collaborative software allows your group to respond to such periodic assessments, and to help make sense of the combined answers. It allows these results to be developed on a level playing field, where there is no concern for 'who said what' -- just the ideas playing out against each other.




The Value of Feedback

This is basic stuff, but it is so easily overlooked in the everyday life of an organization. What do your customers really think? How are the actions of your leadership team perceived by others? Are we really going to deliver our next product feature on time? Are we on some collision course, and don't know it?
I was just reading about Robert Kaplan's new article in HBR this month: What to Ask the Person in the Mirror. He discusses the increasing importance, and difficulty, of getting an assessment of how you are doing as a leader as you rise in a hierarchy, and advocates disciplined self-reflection in seven areas -- along with the importance of getting accurate feedback from your employees.
I remember a time when I was running a fast-growing company, and how important it was to have real advice from someone within my organization who was unafraid of telling me how different the perceptions down in the ranks were from my outward-facing, change-driven priorities. She insisted that I spend the time (while I argued I didn't have it) to meet with everyone, explain the context (again!, from my perspective), and listen (to concerns that seemed to me dwarfed by customer requirements and structural shifts). I needed to have that perspective -- from outside my immediate view. The organization needed me to balance that feedback with the outer drivers in order to build an effective change possibility.
We had a structured process for meeting regularly with our key customers as a group, to review developments in our field, talk about areas of concern from their various vertical perspectives (policies, regulations, quality, and costs), and also to build a sense of partnership. These were not always easy meetings -- divulging a problem or a special concession to a group of powerful corporate gatekeepers. But they served the important purpose of providing context for changes we needed, as well as helping our customers maintain perspective about their programs as we provided them.
There are many vehicles for getting valuable feedback: 360s, customer assessments, internal scorecards, employee culture surveys, prediction markets, confidence checks on strategy planks. Collaborative software makes these easy to set up -- the key is having the discipline and the will to ask for the information.
Think about your own situation -- how do you get feedback? What information should your organization know, but you don't? Who is going to tell you what you might not want to hear, and how is that going to happen?

Deeper Understanding Leads to Effectiveness

My partner here at GroupMind sent me an interesting article the other day: the author proposes looking at collaboration platforms within organizations in three modes (by "platforms" he is speaking about sets of practices and systems):
  • exploration
  • experimentation
  • execution
It is written by Satish Nambisan for the Stanford Social Innovation Review. He says,
"Collaboration platforms can help dismantle the long-held barriers between government, business and non-profit sectors. They also speed the cross-fertilization of innovative ideas and solutions throughout the sectors."
"To be effective partners in social innovation, organizations need a deeper understanding of these three platforms so that they may develop the necessary skills and resources."
I think Mr. Nambisan has pointed out a valuable framework for thinking about what we are doing with collaboration. You see a lot of "cool" widgets out there -- the real issue is to understand the overall context for initiating collaboration within a department or across the organization. What are the larger goals we are connecting?
By making this framework explicit, the organization gains some clarity for the practices it promotes with its call to collaboration; the designers of the processes get a more specific context for their processes and the sponsors have a clearer expectation for outcomes. The users, of course, can still do whatever they want, but hopefully the fledgling enterprise has focused the efforts of all involved at the very start.
For those considering adding collaboration capacity for a project or a part of an organization, I see this framework as providing a useful guide in thinking through various aspects of the idea:
  • tools
  • metrics
  • involvement of stakeholders
  • timeframes
  • goals
If you look at several collaboration projects you know about, consider whether applying this framing would have helped to clarify what should have been going on. I believe there is a rich vein to be mined in understanding many social innovation tools through these glasses.

The power of iteration

During a recent 2-day engagement with a group, I was struck by the power of repeating a simple process again and again as an effective learning and expansion method. The group (all new to online collaborative work) was in a room together in NYC while I was here in California, talking over a Polycom. We used a very simple input device repeatedly over the course of the day -- a simple brainstorm tool that showed new content arriving automatically at the top of the list. As the day progressed I added features, such as entering ideas into an extra column to the right of their list, so that their refined summary themes showed up as they talked through their collective thinking. Later we used several categories for input, so that if you were simply watching the page, the list started building out in six different categories at the same time. (A complex planning process evolved easily as an extension of their simultaneous talking and writing on the page.)

In a small-group breakout, we had 4 to 5 people work in each of the categories, refining a list of 20-30 or so issues down to their top 6 goals. Again we used the same format, but introduced the ability to move items up and down within their category, along with a dotted line; in this way they could easily move their key issues "above the line" while still referring to all the items on their list. By this time they had seen items move around in previous lists and were familiar with working these features themselves, so the planning progressed faster, with no drag from trying to figure out the technology and no need for the whole room to work on each category one at a time.

The two days resulted in the completion of a complex strategy session, through six separate steps, with specific stretch goals set up in six categories, underscored by an overall purpose statement and a set of shared values. We probably doubled our speed of completing process steps, we achieved full participation by all members with a robust diversity of thought, and we were able to review and understand all the material the group generated across the two days. The technology enabled this, but the strength of the process was the iteration, at various levels of complexity, of a simple input step that involved everyone, and that taught people new collaborative skills almost as a transparent side effect.

Although I do this all the time, this meeting was one of those "oh yeah!" moments of seeing learning in action, and appreciating the value of simplicity and repetition.

Second Takes: what do we perceive?

Here is an interesting set of pictures, playing with perception.... (from www.neuromarketing.com). One takeaway from this is a reminder that different people can see the same thing in different ways. It seems to me it is also possible for me to look at something again, and then see it differently. How do we build this possibility into our decision and planning processes? How many times do we check for another view? (How many strategic planning cycles are affected by first impressions, or by the strongly held views of a few key people?)

Find more photos like this on Neuromarketing

