
Learn About Indiana’s Youngest Children with the 2019 ELAC Annual Report!

Indiana’s Early Learning Advisory Committee (ELAC) released its 2019 Annual Report. Each year, ELAC completes a needs assessment on the state’s early childhood education system and then recommends solutions.

We want to share some quick highlights and key takeaways from this year’s needs assessment.  ELAC focuses on ensuring early childhood education is accessible, high-quality, and affordable to all families. 

Are Children Ages 0-5 Receiving High-Quality Care?

  • Of the 506,257 children in Indiana ages 0-5, 64% need care because all parents are working. This includes households headed by a single working parent and households where both parents work outside the home. (Figure 9 in the report)
  • Of those children who need care, only 40% are enrolled in known programs. The other three-fifths of children receive informal care—from a relative, friend, or neighbor.
  • Of the young children who need care, only 16% are enrolled in high-quality programs. A high-quality early childhood education program not only ensures that children are safe, but also supports their cognitive, physical, and social-emotional development. 

Are Children in Vulnerable Populations Receiving High-Quality Care?

  • Indiana makes funding assistance available for early childhood education for children from low-income families.
  • Indiana does not collect data on children in other vulnerable populations, such as children in foster care and children affected by the opioid epidemic.
  • Overall, due to lack of data, Indiana does not know the kind of care received by children in vulnerable populations.

What Trends Are There in Early Childhood Education?

  • Since 2014, Indiana has made progress by enrolling more of the children who need care in known early childhood education programs. 
  • Over the past 5 years, Indiana has consistently enrolled fewer infants and toddlers than preschoolers in known and high-quality programs. (Figure 31 in the report)
  • Compared to 2012, more early childhood education programs are participating in Paths to QUALITY™, Indiana’s quality rating and improvement system.
  • In addition, significantly more programs have earned high-quality designations of either Level 3 or Level 4 since 2012.

What Trends Are There in the Early Childhood Education Workforce?

  • Indiana’s early childhood education workforce is more diverse than the K-12 workforce but not as experienced.
  • Nationally, the early childhood education workforce earns $4-$7 less per hour than the average hourly wage of all occupations.

What is the Unmet Need in the Early Childhood Education System?

  • There has been a persistent need in early childhood education programs for more available spots for infants and toddlers.
  • Despite overall improvements, there are still some communities in Indiana with no high-quality early childhood education programs.
  • The tuition cost of high-quality early childhood education programs remains unaffordable, and the available financial assistance for low-income families is insufficient.

How Can I Find Out More?

  • Read the 2019 ELAC Annual Report, which includes statewide data on Indiana.
  • ELAC also publishes an interactive dashboard that allows you to learn more about specific data points. You can also easily present data to stakeholders.
  • The interactive dashboard contains both state- and county-level data. Use the map to select your county, and hover over the data to learn more!

2019-elac-interactive-dashboard

Transform Consulting Group is proud to support ELAC’s work by pulling this needs assessment and interactive report together!

Does your organization, agency, or coalition need to better understand your community or a key issue, but you don’t know how to get started? We are skilled in collecting quantitative data from multiple data sources and pulling it together in a visually appealing, user-friendly report. Contact us to learn how we can help you complete your next needs assessment!


5 W’s of a Process Evaluation: Part 2

In a recent blog post, we introduced the first two W’s of a process evaluation:

  1. Why conduct a process evaluation
  2. Who should conduct a process evaluation

This blog post will cover the remaining three W’s:

  1. What methods to use to conduct a process evaluation
  2. Where to conduct a process evaluation
  3. When to conduct a process evaluation

WHAT METHODS TO USE WHEN CONDUCTING A PROCESS EVALUATION

There are several different data tools and methods you can use during a process evaluation. It may be helpful to use a combination of these methods!

  • Review documentation: It can be helpful to review staff logs, notes, attendance data and other program documents during a process evaluation. This method will help you to assess if all staff are following program procedures and documentation requirements.
  • Complete fidelity checks: Many programs/curriculums come with fidelity checklists for assessing program implementation. This is especially important if you are implementing an evidence-based program or model. Programs may have a set number of required sessions and guidelines for how frequently they should occur. You can use fidelity checklists to assess whether the program’s implementation is consistent with the original program model (see the scoring sketch after this list).
  • Observe: Observations can be especially helpful when you have multiple sites and/or facilitators. During observations, it’s crucial to have a specific rating sheet or checklist of what you should expect to see. If a program has a fidelity checklist, you can use it during observations! If not, you should create your own rubric.
  • Collect stakeholder feedback: Stakeholder feedback gives you an idea of how each stakeholder group is experiencing your program. Groups to engage include program staff, clients, families of clients and staff from partner programs/organizations. You can use interviews, surveys, and focus groups to collect their feedback. These methods should not focus on your clients’ outcomes, but on their experience in the program. This will include their understanding of the program goals, structure, implementation, operating procedures and other program implementation components.
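As a concrete illustration, here is a minimal sketch of how observation results against a fidelity checklist might be scored per site. The checklist items, site names, and results are hypothetical examples, not the actual instruments or data from the Wabash YMCA evaluation.

```python
# Minimal sketch: scoring a program-fidelity checklist from site observations.
# All checklist items, site names, and results below are hypothetical.

# Each site's observation: checklist item -> True if implemented as designed
observations = {
    "Site A": {"opening_routine": True, "required_dosage": True, "staff_ratio": False},
    "Site B": {"opening_routine": True, "required_dosage": False, "staff_ratio": True},
}

for site, checklist in observations.items():
    met = sum(checklist.values())           # items implemented as designed
    score = met / len(checklist) * 100      # percent fidelity for this site
    gaps = [item for item, ok in checklist.items() if not ok]
    print(f"{site}: {score:.0f}% fidelity; gaps: {', '.join(gaps) or 'none'}")
```

A simple per-site score like this makes it easy to compare implementation consistency across sites and flag where follow-up is needed.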

In our evaluation project with the Wabash YMCA’s 21st Century Community Learning Center, we used a combination of the methods described above. Our staff observed each program site using a guiding rubric. Our team collaborated beforehand to make sure they had a consistent understanding of what components to look for during observations. We also collected stakeholder feedback by conducting surveys with students, parents and teachers. The content of these surveys focused on their experiences and knowledge of the program. After the program was complete, we reviewed documentation, including attendance records and program demographic information.

WHERE TO CONDUCT A PROCESS EVALUATION

You should conduct a process evaluation wherever the program takes place. To capture an accurate picture of implementation, an evaluator needs to see how the program operates in the usual program environment. It is important to assess the implementation in all program environments. For example, if a program is being implemented at four different sites, you should assess the implementation at each site.

In our evaluation project with the Wabash YMCA, we assessed the program implementation at three different school sites. This involved physically observing the program at each site as well as reviewing records and documentation from each site. Being in the physical environment allowed us to assess which procedures were used consistently among sites. It also helped us identify program components that needed improvement.

WHEN TO CONDUCT A PROCESS EVALUATION

An organization can conduct a process evaluation at any time, but here are a few examples of times when its use would be most beneficial:

  • A few months to a year after starting a new program, you can conduct a process evaluation to assess how well your staff followed the implementation plan.
  • When you’re thinking about making a change to a program, a process evaluation will help you determine in what program areas you need to make changes.
  • If your program is not doing well, conduct a process evaluation to see if something in your process is interfering with program success.
  • When your program is doing well, conduct a process evaluation to see what in your process is making it successful.
  • If you’ve had issues with staff turnover, conducting a process evaluation can help identify gaps in staff training, professional development and ongoing support that may be contributing to the turnover rate.

To determine when to conduct a process evaluation, it is also important to consider the capacity of your organization. Make sure that your staff will have enough time to devote to the evaluation. Even when using an external evaluator, staff may need to spend extra time meeting with evaluators or participating in focus groups/interviews.

We conducted our evaluation with the Wabash YMCA at the end of their first year of program implementation. Evaluating the first year of implementation allowed us to provide them with recommendations on how to improve the program’s implementation in future years. We will conduct a similar evaluation in each of the next three years to track their operations and processes over time.

If your organization needs support in conducting a process evaluation, contact us today to learn more about our evaluation services!

Share this article:Share on Facebook
Facebook
Tweet about this on Twitter
Twitter
Email this to someone
email
Share on LinkedIn
Linkedin

5 W’s of a Process Evaluation: Part 1

When it comes to program evaluation, people often think of evaluating the effectiveness and outcomes of their program. They may not think about evaluating how the program was administered or delivered, which may affect the program outcomes. There are several types of valuable evaluations that do not focus on outcomes. One type, called a “process” or “formative” evaluation, assesses how a program is being implemented.

In this two-part blog series, we are going to cover the 5 W’s of a Process Evaluation:

  1. Why conduct a process evaluation
  2. Who should conduct a process evaluation
  3. What methods to use to conduct a process evaluation
  4. Where to conduct a process evaluation
  5. When to conduct a process evaluation

In this first blog in the series we will cover the first two W’s. The next blog will discuss the other three.

WHY CONDUCT A PROCESS EVALUATION

Let’s start with the “why”. A process evaluation helps an organization better understand how their program is functioning and operating. Process evaluations also serve as an accountability measure and can answer key questions, such as:

  • Is the program operating as it was designed and intended?
  • Is the current implementation adhering to program fidelity?
  • Is the program being implemented consistently across multiple sites and staff, if applicable?
  • What type and frequency of services are provided?
  • What program procedures are followed?
  • Is the program serving its targeted population?

It is important to determine what you want to learn from your process evaluation. Maybe you want to assess if the program is being implemented as it was intended or you want to know if the program model is being followed. Whatever the reason, you want to be clear about why you are completing the process evaluation and what you hope to learn.

We are currently working with the Wabash YMCA’s 21st Century Community Learning Center to evaluate their program implementation. Each center is required to work with an external evaluator to conduct a process evaluation. Here is what we hope to learn from this evaluation and why:

  1. The evaluation will assess if the program has been implemented as it was intended and if it is adhering to state standards;
  2. This evaluation will capture the population served through the assessment of attendance trends;
  3. The findings from the process evaluation will be used for program improvement in subsequent years.

WHO SHOULD CONDUCT YOUR PROCESS EVALUATION

When determining who will conduct your process evaluation, you have the option of either identifying an internal staff member (e.g., program manager or quality assurance) from your organization or hiring an external evaluator. Many organizations find that there are challenges with an internal team member: they may not be objective, they don’t have a fresh perspective, and they often have other job responsibilities beyond the evaluation.

For the reasons mentioned above, it is beneficial to have an external evaluator (like TCG!). An external evaluator will be able to assess the operations of your program from an unbiased lens. This is especially helpful if a program has multiple sites. An external evaluator can assess all sites/facilitators for consistency more objectively than a program staff member. (If you’re interested in learning more about how to evaluate multi-site programs, view our blog post here!).

In our evaluation project with the Wabash YMCA, the decision to conduct an evaluation with an external group was made by their funders. This decision ensures that the evaluation is high quality and objective.

The other three W’s will be discussed in a later blog post, so stay tuned! In the meantime, contact us today to learn more about our evaluation services!


4 Steps to Create a Dashboard

In today’s information age, organizations are overwhelmed with the amount of information that they collect, track and monitor. Non-profit leaders must decipher all the data to determine what is meaningful and relevant to share with staff, funders, Boards of Directors and other community partners. A dashboard is a great tool to bring all the critical elements together in a user-friendly report.

Through our program evaluation and research and analysis services, we help organizations create dashboards. Here are four steps to create an effective dashboard:

  1. Determine the audience for the dashboard. A dashboard is customized for the audience meant to view and use it, so first an organization needs to determine the intended audience. Then it needs to determine the key takeaways it wants the targeted audience to get from the dashboard. Lastly, the organization should focus on the information that is most important and relevant for this audience.
  2. Decide what the dashboard is tracking now that the audience is determined. A dashboard is meant to communicate progress over time, such as monthly, quarterly or annually. In addition, data in the dashboard can be used to compare different data sets, such as geographic locations, sites or populations. These factors need to be determined to provide the appropriate context for decision makers.
  3. Determine the visuals that will be most effective in communicating the message. In most cases, we work to fit a dashboard on one page. This does not provide much “real estate”, so you must be intentional about the visuals used to grab the audience’s attention and display the key messages [Sidebar: this is why we use Tableau!]. A dashboard does not have much room for wording and explanation.
  4. Determine the delivery of the dashboard. In most cases, dashboards are “static” or print reports that are shared via handouts or electronically. However, with the growing development of software programs, more interactive dashboards are being created. In some cases, there may be value in creating both a static dashboard that is completed annually and an interactive dashboard that is updated in real time. (A sketch that captures these four decisions as a simple spec follows this list.)
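Before any tool work begins, it can help to write the four decisions down as a simple specification. Here is a minimal sketch of what that might look like; every field name and value is a hypothetical example, not a prescribed format.

```python
# Minimal sketch: the four dashboard decisions captured as a spec
# before anything is built in Tableau. All values are hypothetical.

dashboard_spec = {
    "audience": "Board of Directors",                 # Step 1: who will use it
    "tracking": {                                     # Step 2: what, and over what period
        "indicators": ["enrollment", "pass_rate"],
        "cadence": "quarterly",
        "comparisons": ["site", "year_over_year"],
    },
    "visuals": {                                      # Step 3: one page, intentional charts
        "page_limit": 1,
        "charts": {"enrollment": "bar", "pass_rate": "line"},
    },
    "delivery": "static_pdf",                         # Step 4: static vs. interactive
}

print(dashboard_spec["audience"], "->", dashboard_spec["delivery"])
```

Agreeing on a spec like this keeps the design conversation focused on audience and message before any time is spent on layout or software.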

As we shared in this blog post about creating a needs assessment and annual report, we created a state dashboard and county profile for ELAC. After conducting the first state needs assessment on young children, ELAC realized that the amount of data and information was overwhelming. ELAC was inspired by the Annie E. Casey Foundation’s 16-indicator dashboard in its annual data book and used it as a model for ELAC’s own dashboard.

Following the four steps above and inspired by the Annie E. Casey Foundation’s dashboard, we worked with ELAC to create a state dashboard that:

  • Identifies four focus areas (children and families, high-quality care programs, the education workforce, and kindergarten readiness) with 16 key indicators.
  • Compares progress over time (depending on when new data are released).
  • Uses arrows (a visual tool) to depict whether the numbers represent improvement (arrow goes up) or worsening (arrow goes down); a minimal sketch of this arrow logic follows the list.
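To make the arrow logic concrete, here is a minimal sketch of how the arrows could be derived from two years of data. The indicator names and values are hypothetical; note that for some indicators (like cost), a decrease is the improvement, so the direction of “better” is stored per indicator.

```python
# Minimal sketch: deriving improvement/worsening arrows from two years
# of indicator data. Indicators and values are hypothetical.

indicators = {
    # name: (prior year, current year, higher_is_better)
    "children_in_high_quality_programs": (40_000, 46_000, True),
    "average_annual_tuition": (9_200, 9_800, False),  # rising cost is worsening
}

for name, (prior, current, higher_is_better) in indicators.items():
    improved = (current > prior) == higher_is_better
    arrow = "↑" if improved else "↓"   # arrow up = improvement, down = worsening
    print(f"{arrow} {name}: {prior:,} -> {current:,}")
```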

After a few years of creating the ELAC state dashboard, communities across the state of Indiana were asking for this same data at the local level. We worked with ELAC and our state data partners to gather the county-level information to create a two-page county dashboard and profile. The first page of the county profile mirrors the state dashboard with a few exceptions.

Instead of comparing progress over time, the county dashboards compare the county data to the state data. Following steps #1 and #2 above, we focused on the audience for the county dashboards, who told us that seeing their data in the context of the state data would help them know whether they are doing better or worse. That context therefore ranked as a higher priority than comparing their data over time.

The second page of the ELAC county dashboard was new and provided the opportunity to add visuals (charts and graphs) to depict the key findings in the full narrative report. The visuals help to communicate complex information in simple charts.

Using a data visualization software program like Tableau is critical not only to make the dashboards visually appealing but also to automate the process. In this case, we created 93 unique dashboards for the state and all 92 counties. While the ELAC dashboards are currently only static reports, there is the option and feature (with Tableau) of making them interactive like the Indiana Commission for Higher Education’s College Readiness Dashboard. One of our good friends, the M.A. Rooney Foundation, has also been working to translate K-12 data for schools and community partners into meaningful dashboards.
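The automation point is worth illustrating. Rather than building 92 county reports by hand, the same template can be rendered once per county from a data table. Here is a minimal sketch of that idea in Python with matplotlib; the county names and values are hypothetical, and the actual ELAC dashboards were produced in Tableau, not with this code.

```python
# Minimal sketch: rendering one dashboard image per county from a single
# template. County data and the state averages are hypothetical.
import matplotlib
matplotlib.use("Agg")  # render to files; no display needed
import matplotlib.pyplot as plt

years = [2017, 2018, 2019]
state_avg = [50, 54, 57]                               # % enrolled, statewide
county_data = {"Adams": [58, 61, 64], "Allen": [52, 55, 60]}

for county, values in county_data.items():
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.plot(years, values, marker="o", label=county)
    ax.plot(years, state_avg, linestyle="--", label="State")  # state context
    ax.set_title(f"{county} County: Enrollment in Known Programs (%)")
    ax.set_xticks(years)
    ax.legend()
    fig.savefig(f"{county.lower()}_dashboard.png", bbox_inches="tight")
    plt.close(fig)  # free memory before the next county
```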

Are you ready to get started in creating a dashboard for your organization? We would love to work with you to help you focus on the key indicators important for your organization and create a dashboard that informs decision-making! Contact us today for help.


Putting Data into Context

At Transform Consulting Group, we are proud data nerds. Through our evaluation services, we help clients collect, analyze, and share meaningful data. In this blog post, we explained who to share your data with and why. In today’s post, we will go one step further by providing tips on how to present your data in a meaningful way. More specifically, we’ll discuss how to put your data in context and why it is important to do so.

When presenting your data, you shouldn’t share it in isolation. For example, an after school tutoring program might find that 75% of their students pass their required standardized tests. If the program shared this data point by itself, their audience might have a lot of unanswered questions, like:

  • How does this pass rate compare to other students who don’t receive tutoring services?
  • How does this rate compare to local and national data?
  • Which standardized tests is the statistic referring to?

To avoid this problem and present their data in a meaningful way, it would be best for the tutoring program to cite outside data sources to provide comparison, credibility, and context. By including this additional information, the program could more fully illustrate its impact and outcomes.

We are currently working with the Center for Leadership Development to develop an evaluation plan. Through this process, we have helped them demonstrate their impact by presenting their data within context. Here are three tips we shared with them that can also help you use outside data sources to put your data into context.

1. Find credible data sources that add meaning to your data.

When citing outside data, it’s important to make sure the data is credible, accurate, and relevant to your organization’s work. When working with clients like CLD, we often provide a resource sheet listing different data sources they can cite for comparison and context. An example of a data source we shared with CLD is the Indiana Commission for Higher Education’s College Readiness Dashboard. This was an appropriate choice because it is a reliable interactive data set that can be used to compare the outcomes CLD students experience with those of other students in their state and county in similar demographic groups. Check out this blog post for a list of our go-to data sources. This list may help you identify which data sources you can cite to move your organization forward.

2. Benchmark similar programs.

In a previous blog post, we explained that you may want to benchmark the practices of organizations similar to yours when making a programmatic change or looking to diversify your funding. Benchmarking can also be helpful when creating an evaluation plan and reporting your data. Looking at the outcomes of similar programs gives you comparable data to assess your program’s efficacy.

When working with CLD, we benchmarked similar programs such as College Bound in St. Louis. Their programming aims to help low-income students get into and graduate from college. Not only were they a similar program for CLD to compare their outcomes to, but they are also a great example of an organization that puts its data into context to make it more meaningful. For example, they compare their data to St. Louis Public School data and to low-income students across the nation:

94% of College Bound students have matriculated to college immediately after high school, compared to 66% of St. Louis Public School graduates and only 51% of low-income graduates nationwide.

By presenting this statistic in the context of the students’ school system and other low-income students, College Bound is displaying the impact they are having and the success of their students relative to their peers.

3. Make sure you’re comparing apples to apples.

We always tell clients to make sure they’re not trying to compare apples to oranges. This phrase refers to the comparison of items that aren’t really comparable. An example of this came up in our work with CLD when reporting their alumni’s postsecondary persistence rates. When comparing their persistence data to local and national data, we needed to make sure the outside data set was defining persistence in the same way they were. CLD defines it as persisting from freshman to sophomore year of college. Other sources defined persistent students as those who were enrolled at any institution or had attained a degree 3 years after first enrolling. These two data points aren’t really talking about the same thing and aren’t comparable. By finding the right data sources to compare your data to, you ensure that the data and context are meaningful.
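To see how much the definition matters, here is a minimal sketch showing how the same set of hypothetical alumni records produces two different “persistence rates” under the two definitions described above.

```python
# Minimal sketch: one set of hypothetical alumni records, two
# "persistence" definitions, two different rates.

alumni = [
    {"fresh_to_soph": True,  "any_enrollment_3yr": True},
    {"fresh_to_soph": False, "any_enrollment_3yr": True},   # re-enrolled after a gap
    {"fresh_to_soph": True,  "any_enrollment_3yr": True},
    {"fresh_to_soph": False, "any_enrollment_3yr": False},
]

rate_a = sum(s["fresh_to_soph"] for s in alumni) / len(alumni)
rate_b = sum(s["any_enrollment_3yr"] for s in alumni) / len(alumni)

# Same students, two numbers -- comparing one against the other
# would be comparing apples to oranges.
print(f"Freshman-to-sophomore persistence: {rate_a:.0%}")  # 50%
print(f"Enrolled anywhere within 3 years:  {rate_b:.0%}")  # 75%
```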

If you need help presenting your data in a meaningful way and using it to make data-informed decisions, give us a call to see how we can help through this process!


3 Steps to Establish Clear Outcomes

Evaluation is key in determining if your program is making the desired impact. While critical, evaluation can be an overwhelming and intimidating process for organizations. We have worked with several clients to help them embark on the journey of evaluating their program(s). At Transform Consulting Group, we follow a four-step evaluation process. The first step of establishing clear outcomes can be one of the most difficult. You know what your mission is and you know your vision for a better community, but how do these translate into measurable outcomes?

  1. Establish clear outcomes
  2. Create or modify data tools and systems
  3. Analyze the data
  4. Use data to make informed decisions

Outputs vs. Outcomes

When determining outcomes, the conversation usually starts with program outputs. Outputs are what your program produces: activities, services and participants. Tracking, analyzing and reporting your program outputs is a valuable way of displaying an organization’s work! For example, let’s say an after-school tutoring program served 650 students during the 2017-2018 school year. You could further break that number down by age and frequency of services:

Age group      | Session frequency   | Number of participants | Total sessions provided
3rd-5th grades | Weekly for 10 weeks | 320                    | 320 × 10 = 3,200
6th-8th grades | Weekly for 15 weeks | 330                    | 330 × 15 = 4,950
Total tutoring sessions provided: 8,150
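Since the table’s arithmetic is the whole point, here is a minimal sketch that computes the same totals directly from the session structure (the numbers match the table above).

```python
# Minimal sketch: computing the output totals from the table above.

groups = [
    {"grades": "3rd-5th", "weeks": 10, "participants": 320},
    {"grades": "6th-8th", "weeks": 15, "participants": 330},
]

total_students = sum(g["participants"] for g in groups)               # 650 served
total_sessions = sum(g["participants"] * g["weeks"] for g in groups)  # 3,200 + 4,950

print(f"Students served: {total_students}")                # 650
print(f"Tutoring sessions provided: {total_sessions:,}")   # 8,150
```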

With a few simple calculations, we have a powerful representation of the work this tutoring team has accomplished! However, outputs alone don’t display programmatic impact.

Outcomes go one more step in showing impact. Outcomes are the changes in knowledge or behavior that you want your clients to experience as a result of your program. They are the “so what” of your services and activities. There are three levels of outcomes that you want to set and measure:

  1. Short-term: What changes in knowledge, attitude or behavior do you want to see in your clients by the time they complete your program or service?
  2. Intermediate: What changes do you want to see in client knowledge, attitude or behavior 6 to 12 months following program completion?
  3. Long-term: What changes do you want to see in client knowledge, attitude or behavior 1+ years after program completion?
We recently worked with the Center for Leadership Development (CLD) to develop short-term, intermediate and long-term outcomes. They are focused on helping get more students of color to and through postsecondary education. Here are three steps that we used to help them establish clear outcomes that assess the impact of their organization.

1. Align to Organizational Mission and Purpose

When you set outcomes, you want to make sure that they align with your organizational mission and benchmarks. CLD’s programming and organizational benchmarks are centered around five principles for success: character development, educational excellence, leadership effectiveness, community service, and career achievement. We helped them establish several outcomes that aligned with their programs, mission, and key principles.

2. Review Funders’ Priorities

When receiving grant funding or large donations, organizations often make commitments about what they will accomplish with those funds. Therefore, you want to make sure that future outcomes still align with your current funding priorities and commitments. We worked with CLD to make sure that their many outcomes aligned with the commitments they had made with their current funders.

3. Develop SMART Outcomes

When working with clients to develop outcomes, we follow the “SMART” rubric. We plan to write a full blog post going more in-depth on the SMART rubric, but for now the main takeaway is that outcomes should be specific, measurable, achievable, relevant and timely.

One of CLD’s long-term desired outcomes is for 75% of their participants to earn a bachelor’s degree or credential within six years of high school graduation. This outcome aligns perfectly with their mission and funding commitments, but is it SMART? Let’s check!

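Part of what makes this outcome “measurable” is that, given alumni records, the target can be checked directly. Here is a minimal sketch of that check; the records and dates are hypothetical examples, not CLD’s data.

```python
# Minimal sketch: checking a measurable outcome against hypothetical
# alumni records. Target: 75% earn a degree or credential within six
# years of high school graduation.
from datetime import date

TARGET = 0.75
SIX_YEARS_DAYS = 6 * 365

alumni = [
    {"hs_grad": date(2012, 6, 1), "degree": date(2016, 5, 15)},
    {"hs_grad": date(2012, 6, 1), "degree": date(2019, 12, 20)},  # past the window
    {"hs_grad": date(2012, 6, 1), "degree": None},                # no degree yet
    {"hs_grad": date(2012, 6, 1), "degree": date(2017, 5, 10)},
]

within_six = sum(
    1 for a in alumni
    if a["degree"] and (a["degree"] - a["hs_grad"]).days <= SIX_YEARS_DAYS
)
rate = within_six / len(alumni)
status = "met" if rate >= TARGET else "not yet met"
print(f"{rate:.0%} earned a degree within six years (target {TARGET:.0%}): {status}")
```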
With their clear outcomes established, CLD now has a road map of where they want their participants to go. This road map not only helps CLD stay on course, but it also helps to paint a picture of their desired impact for their funders and supporters. Now they are ready to move on to the next step of their evaluation: creating or modifying data tools and systems!

If you’re ready to evaluate your program, but are hesitant to take the first step, contact us today!

How Are Indiana’s Youngest Children Doing? The 2018 ELAC Annual Report Gives Insight.

Indiana’s Early Learning Advisory Committee (ELAC) just released its 2018 Annual Report—the fifth since ELAC’s inception in 2013. Annually, ELAC completes a needs assessment for the state’s early learning system and recommends solutions. The goal is to establish a baseline of where Indiana stands on key indicators and to make best-practice recommendations to address the gaps. The result of this year’s annual needs assessment is three key reports and tools, which are described under “How Can I Find Out More?” below.

ELAC’s seven appointed members work alongside 150 workgroup volunteers who focus on different aspects of the state’s early learning system. All this energy centers on providing early childhood care and education that is accessible, high-quality, and affordable to all families.
How Are Children Ages 0-5 Doing Today?

  • Of the 506,761 children in Indiana ages 0-5, 65% need care because all parents are working. This includes working parents who are single as well as households where both parents work outside the home. (Figure 3 in the report)
  • Of those children who need care, only 41% are enrolled in known programs. The other three-fifths of children are in informal care settings—with a relative, friend, or neighbor—where the quality of care is unknown.
  • Of the young children who need care, only 15% are enrolled in high-quality programs. A high-quality program not only ensures that children are safe, but also supports their cognitive, physical, and social-emotional development for kindergarten readiness and beyond.

What Are Some Of Indiana’s Accomplishments On Behalf Of Young Children?

  • There are more high-quality early childhood care and education programs available. In 2012, Indiana had just over 700 high-quality programs. There are now almost 1,200.
  • Today there are 4.5 times more children enrolled in high-quality programs than there were five years ago.
  • Over half of the counties increased their number of high-quality programs.

What Is The Unmet Need Identified In The 2018 ELAC Annual Report?

  • There are communities in Indiana with no high-quality programs.
  • The tuition cost of high-quality early childhood care and education programs is unaffordable, and the available financial assistance for low-income families is insufficient. (See the tuition comparison figure in the report.)
  • There is a lack of high-quality seats for infants. Only 7% of children ages 0-5 in high-quality programs are infants.

How Can I Find Out More?

  • As in past years, ELAC has published a full annual report, which includes statewide data on Indiana.
  • ELAC has also compiled updated 2018 county-level data for all 92 Indiana counties to aid local stakeholders and coalitions in their work. Use the map to select your county. You can review your county’s profile in an interactive dashboard or a PDF report!
  • New this year: ELAC published an interactive dashboard with all of the data in the annual report—allowing you to learn more about specific data points and easily present data to stakeholders. It also includes comparisons between counties so you can see how well your community is doing compared to others.

Transform Consulting Group is proud to support ELAC’s work to help each of our youngest learners reach their full potential!

Transform Consulting Group can also help your organization or coalition with data analysis, creating dashboards to visualize your data, and meaningful reporting. Contact us to multiply your impact!


Why is a Program Evaluation Impact Team so Important?

How does an organization know it’s meeting its goals and objectives? An outside team can be hired to put tools and systems in place, which is a good start, but investing only in outside consulting can leave organizations floundering in a constantly evolving reality. What if, instead of investing solely in an outside group, an organization invests simultaneously in its own people? It is exceedingly important to invest in the right team of people to help an organization push forward, evaluate and sustain systems, and keep a “pulse” on the organization. That’s what an impact team does.

In a past blog, we talked about the 4 steps of a program evaluation. The impact team would work closely with the outside evaluator to complete the four steps and keep the evaluation cycle going beyond the consultation engagement! They are essentially your internal “CQI” or continuous quality improvement team.

An impact team is a cross-cutting team of staff that come together on a regular basis (e.g., quarterly) to ask critical questions, review data, and make meaning of the information; basically, to integrate program evaluation into the organization. They would discuss how the data is being collected to ensure the processes and systems in place are being followed. They would also review the big goals identified in the logic model and discuss whether the targets have been met and why or why not. Lastly, and most importantly, they would identify data-informed recommendations to improve the outcomes.

Transform Consulting Group worked closely with the Center for Leadership Development to implement a system for in-house evaluation and train their impact team to ensure integration of evaluation within the organization and cross-department alignment. Policies and procedures were put into place to guide system processes and outline impact team member roles and responsibilities. The team was trained on how to collect data using the agreed-upon tools. A data management plan and schedule were created to align with the organization’s programming schedule and keep the impact team on track with data collection and analysis throughout the year. This ensured new evaluation results would always be ready in time for important fundraising events and annual strategic planning sessions.

Who should be on an impact team?  The executive decision-makers?  All management?  Maybe just the staff who worked on the programs being evaluated?  Only if you want a myopic view of your organization’s reach.  The impact team should consist of staff at all levels to have different perspectives.  Information will be interpreted differently by each person on the team based on their unique capabilities, experiences and strengths.

What is interesting?  What stands out?  Some results may be more obvious to different members of the team.  This is what makes a great impact team.  Take time to think about the best people at each level of the organization and don’t be afraid to adjust those involved as things change.  An impact team will be engaged in the most crucial elements of the organization.  Make sure to include members capable of critical thinking and connecting the dots—systems level thinkers, not just task-masters.  Those who can perform high-level analysis, problem solving, and decision making are essential, but don’t exclude those doing work on the ground floor.  Their understanding of what it takes to execute the organization’s vision on a daily basis is invaluable.  Build a team from all levels that is excited, engaged, and willing to be honest about what works and what doesn’t.

This is where the big decisions start. The informational “tools” to make big decisions begin with the evaluation data (the hard evidence) and the folks who can interpret what the data is saying. An organization’s impact team would present evaluation results and subsequent recommendations to the board and leadership team regularly. This equips organizations to make data-informed recommendations for decision making, such as modifying programming elements, letting go of a program, refining the target population, requiring more professional development for program staff, adjusting program dosage for participants, and other organizational or programmatic changes.

Transform Consulting Group can help you create and sustain a winning program evaluation impact team for your organization.  Please contact us today to learn more!


You Finally Have Data! Who Should You Share It With and Why?

Imagine your organization recently conducted a program evaluation. Data was gathered from one of the programs offered, and an analysis of the data revealed that your program outcomes were met, some even exceeded! This is exciting news and you want to shout it from the nearest rooftop… We understand this urge, but there are better (and safer) ways to share your evaluation data. In this blog, we discuss different audiences with whom an organization might share their evaluation results and the benefits of sharing them. Stay tuned for a future post discussing how to share your evaluation data.

At Transform Consulting Group, we love seeing organizations use data to measure and achieve success!  We also love helping organizations “visualize” success in easy-to-understand ways using simple charts and graphics.  Once these data visuals are audience-ready, what are the next steps?  Does an organization need to be intentional about sharing program evaluation data?  Why?

Data is only as good as how it is used.  If an organization collects data, analyzes it and makes beautiful visuals with it, but few people actually see the results, what is gained from the experience?  Probably not much!  This connects back to a blog we wrote called, 4 Steps to Complete a Program Evaluation.  Step number four in the process is to discuss the results of an evaluation and make data-informed decisions.  In order to do this, evaluation data must be shared with the right people.  

Our team typically compiles a summary report and/or slide presentation of an organization’s evaluation data for stakeholders to review the results and discuss their implications. A stakeholder is anyone who has a “stake”—an interest, concern, or investment—in an organization and/or program achieving (or not achieving) its goals. We categorize data sharing into two different “buckets”: internal and external. As it sounds, sharing data internally is with folks on the inside of an organization, and external data sharing is with folks on the outside looking in. Within each bucket, we’ve identified three important stakeholder groups and how sharing evaluation results can be beneficial, no matter what the data “says”.

Internal Stakeholders

1. Employees:  It would be challenging to find a dedicated employee who would not care to see evidence of their daily efforts actually paying off and leading to positive change.  The achievement of program outcomes reflected in data can help validate the combined efforts of all staff involved in a program’s design (if applicable) and delivery.  It is important to share evaluation results with staff at all levels and not just employees at the top.  Too often, we see that the information does not trickle down.  Sharing, showcasing and celebrating success builds morale and encourages staff to continue doing great work.  In the business sector, this would be akin to celebrating a top-sales month! 

On the other hand, results of an evaluation may reveal that a program is not achieving its intended outcomes and uncover potential reasons why not.  Evaluation data can “shed light” on issues that staff and/or leadership were not even aware of that could hinder the ability to make an impact.  Based on sharing these results, employees will know to expect workflow or programmatic changes in the near future without confusion or surprise.

At TCG, we recently worked with an organization that provides college and career readiness counseling for high school students to evaluate their 8-week summer program. During orientation, students were given a pre-survey to assess their knowledge of college and career readiness subject matter before participating in the program. On the last day, students were given a post-survey to measure knowledge gained as a result of completing the program. Evaluation data was presented to program staff and leadership in easy-to-understand charts and graphs. Staff became excited when they saw how much college and career readiness knowledge students gained as a result of participating in the program! They were also glad to find out which areas students reported knowing the least about. As a result, staff could strengthen those areas of the program prior to the next round of incoming students.
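For readers who want to see what that pre/post summary might look like under the hood, here is a minimal sketch of the analysis. The topics and scores are hypothetical stand-ins, not the client’s actual survey data.

```python
# Minimal sketch: summarizing pre/post survey scores by topic to show
# knowledge gained and flag the weakest area. All data is hypothetical.

pre  = {"financial_aid": 2.1, "applications": 2.8, "career_paths": 3.0}  # 1-5 scale
post = {"financial_aid": 3.9, "applications": 4.2, "career_paths": 3.4}

gains = {topic: post[topic] - pre[topic] for topic in pre}

for topic, gain in sorted(gains.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{topic}: {pre[topic]:.1f} -> {post[topic]:.1f} (gain {gain:+.1f})")

weakest = min(post, key=post.get)  # lowest post-program knowledge
print(f"Area to strengthen for the next cohort: {weakest}")
```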

2. Volunteers:  Organizations that depend on volunteer work are always on the lookout for more help.  Unfortunately, unpaid labor can be hard to come by, especially if volunteers don’t feel like their time and effort is making a difference.  If an organization can show community impact as a result of volunteerism, their volunteers are more likely to feel validated and remain committed to the organization.  Then they may even recruit more volunteers!  In this case, evaluation data helps to promote satisfaction and the feeling of reward.  Volunteers are motivated to stay put and not seek out other volunteer opportunities when they know that their time and talent is making a difference.

3. Board of Directors:  The Board can use evaluation data to begin planning and discussing the future of an organization.  If the data supports outcomes consistently being met, a Board may decide to expand the program’s service delivery to a larger area or broader audience.  If the expansion is successful, the organization sustains a larger community impact and the Board (hopefully) feels a sense of accomplishment and pride in the organization.

For evaluations resulting in unmet program outcomes, the Board may recommend program model changes, or commit to focus on a problem area (e.g., staff professional development) during the upcoming year. Either way, sharing program evaluation results with Board members equips them to make informed decisions about what is best for the organization going forward.

External Stakeholders

1. Funders:  Funders want to see a return on their investment in any organization.  Many funders require organizations to conduct program evaluations to remain accountable for the results that their funds are directly supporting.  Bottom line, funders want to know if the program is worth the resources that it costs.  Evaluation results help “quantify” worth for funders by showcasing to what extent the desired program changes are occurring.  If a program performs as expected, it is likely that funding dollars will keep flowing.

Alternatively, if evaluation data shows program outcomes not being met and insignificant or no change is occurring, a funder may decide not to invest or discontinue investing in an organization.  Funders are often supporting multiple organizations at once and want to feel confident their money is well-spent and producing the best results!

2. Partners:  Evaluation data may help make a case for two or more organizations to join forces and provide combined programs or services within a community to maximize impact.  Sharing data can be a positive step in the direction of collaborating and working towards common goals.  Local programs may unknowingly be competing for clients, resources or limited funding.  Some are likely struggling to meet goals.  Awareness of such issues, made apparent through evaluation data, could spark ideas to collaborate and leverage partnerships to provide joint programming.  The result?  Sharing data may lead organizations to do what is best for the community, while also doing what is best for their budget!  

Ideally, routine program evaluation and data sharing will keep organizations accountable to each other in the future.  On the other hand, evaluation data can show when partnerships are no longer working and should be realigned or dissolved.

3. The Public:  Sharing data with communities helps to legitimize an organization’s purpose in the public eye.  Program evaluations are one way to demonstrate community impact.  Evaluation data goes a step further to show how much impact an organization or program is making.  Nonprofit organizations serve the public and have a responsibility to communicate back to the public about the good that they are doing with the public’s investment in them!

Foundations, grant makers and other funders research shared public data about an organization or program to determine whether or not to invest in it.  Evidence of successful programming can suggest an organization itself is well-managed.  This inspires confidence that funding dollars will also be used wisely and generate the greatest return on investment.

Internally, sharing evaluation results can galvanize an organization–arming employees (program staff and management) and the Board of Directors with data they need to ensure program outcomes are being met and take appropriate actions when issues arise.  External stakeholders use data to verify an organization’s credibility and hold it accountable to the outcomes it seeks to achieve.  Ultimately, stakeholders use data to keep an organization on track to accomplish its overall goals.

It’s hard to win a game if your team has no idea of the score—or even worse, what game they are playing.  Sharing evaluation data helps to keep an organization’s impact transparent and everyone involved on the same page.  Stay tuned for our next blog in this series about how to share data and use it to tell your story within different communication channels!  Talk with one of our team members today and learn how you can get the word out about your evaluation results!


5 Ways to Use Your Program Evaluation Data

Your organization or school has spent a significant amount of time, money and resources on collecting, tracking and analyzing important data about your programs and services. In working with our clients, we often find that many are not doing a good job of sharing this information beyond the traditional annual report. Here are some simple, yet effective ways to use your program evaluation data:

  1. Annual Report – At a minimum, you should annually produce a report that summarizes your organization’s impact in the community. However, we strongly encourage you to rethink your traditional annual report. Check out these old posts (here and here) for some inspiration!
  2. Email – The signature line in your staff’s email is a great communication tool. Think of all of the collective emails that your staff send out daily, weekly and yearly and the potential reach of those emails. Use the signature line to highlight key successes, which can be updated monthly. For example, if you operate a tutoring program, this quick line could be added to all staff signatures: “85% of participating students in ABC program increased their reading level by 3 months during our five-week summer program.”
  3. Social Media – Similar to the “data sound bites” in your email signature, short data posts can be created for your organization’s social media pages on Facebook and Twitter. Just make sure to limit your jargon and make the post user-friendly.
  4. Collateral Materials – Too often, organizations’ marketing materials focus on the services and programs (what you do) and not the result of those services and programs (aka, your outcomes!). Refresh your marketing materials to include both of these critical items – the programs offered and the impact that these programs have in the community!
  5. Grants and Fundraising – One of the best ways to increase an organization’s revenue and funding is to share the results with your funders via grant proposals, grant reports and fundraising events. The evaluation data can be useful to both highlight the great work you are doing (aka – give us more money to expand our impact) as well as justify the need for more money (aka – we need better staffing, curriculum, etc. to accomplish our goals).

Transform Consulting Group is passionate about helping organizations get clear on their mission and goals as well as have the right tools and systems in place to monitor the accomplishment of those goals. Do you need help evaluating your programs or communicating your impact? Contact us today for a free consultation!
