
An Athlete Performance Management Process

Taking an athlete as the input, we have a process that seeks to get the very best results possible at target events by managing improvements in the athlete's performance. Sounds simple, right?

Figure 1: IDEF0 context diagram

Well actually, at the highest level of abstraction it is. There is a broad consensus regarding how this management process works: placing the athlete at the centre of the process, supported by coaches and specialists (who might also be the athlete, self-taught).

In fact this overall approach is summarised brilliantly by Emma Ross from the EIS; you could call it an operating model, or perhaps a development philosophy:

We place the athlete at the centre of the process, which is led by the coach. They are ably supported by practitioners (specialists) who bring the latest and best research working as a team, all focused on performance.

End-to-End Process

Figure 2: Athlete performance management process (APM)

In very blunt terms the development process can be summarised:


First, we take the athlete and onboard them. This is a baselining process where we create an athlete bio: we get to know them and their current status so we can plan the season and track improvements, or the lack of them.

Once we've done that we can develop a season plan which takes into account what we learned about them, their development goals for the season and the target events they might compete in. On first pass, this will be our baseline plan.

From here, the athlete will need to execute the plan, whether that's training, racing or tests. Along the way, performance measures and athlete feedback are collected.

Coaches and specialists will assess the athlete's performance to make sure the athlete is working, the plan is working and the improvements are occurring in a sustainable way. If something is wrong we need to escalate: raise a flag to say we need to do something about it.

Of course, every week or month, or both, the athlete and coaches will reflect on how things are going. Best practice here is to place the athlete at the centre of this and work from there. This reflection might result in wanting to change the plan; we already learned that elite coaches always assume the plan is wrong.

Indeed, at the end of every training block and at the end of the season, that reflection will be more retrospective, focusing on how the process is working as much as the athlete.

The reason we reflect so much is to make sure everything is working well. If something isn't working then we need to adapt. That might mean the athlete needs to up their game, it might mean the goals and targets need to be less ambitious, or it might mean we need to put more hours in. Either way, the plan needs to change.

On-boarding (Baselining)

Figure 3: On-boarding an athlete

The single most important thing any coach must do is listen to the athlete. Closely. With intent. The second is to build trust, in both directions. So it follows that with a new athlete, or at the start of a new season, the on-boarding (or baselining) process begins with a discussion: an interview.

The interview builds up the athlete profile, telling us about their history, goals and so on. But we also need to baseline the other part of their bio: their current performance levels, the athlete status.

An on-boarding plan typically lasts 30 days, but could be as short as a couple of weeks. It is a plan focused on measuring the current performance of the athlete. Combining the profile and status gives us the athlete bio.
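The bio described above can be pictured as two halves joined together. A minimal sketch, assuming hypothetical field names (this is not a real schema from any tool):

```python
from dataclasses import dataclass, field

# Illustrative sketch: the athlete bio is the combination of a relatively
# static profile (history, goals from the interview) and a measured status
# (current performance levels from the baselining plan). All field names
# here are assumptions for the example.

@dataclass
class AthleteProfile:
    name: str
    history: str                     # training background from the interview
    season_goals: list = field(default_factory=list)

@dataclass
class AthleteStatus:
    cp_watts: float                  # e.g. critical power from baseline tests
    weight_kg: float
    measured_over_days: int = 30     # on-boarding plans typically last ~30 days

@dataclass
class AthleteBio:
    profile: AthleteProfile
    status: AthleteStatus

bio = AthleteBio(
    AthleteProfile("A. Rider", "5 years club racing", ["improve CP"]),
    AthleteStatus(cp_watts=310, weight_kg=72.5),
)
print(bio.status.cp_watts)  # → 310
```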

Part of the on-boarding activity is an induction for the new athlete, understanding the ways of working with the new programme or coaching team, new approach, science, tools and so on. This period is also educational for the athlete with initial reviews and retrospectives used to learn the process before executing it for real.

At the end of the on-boarding process the team can review and determine the development goals for the season and possibly the kind of events the athlete might want to target -- all of which can be used to inform season planning.


Figure 4: Season planning

When we plan we are working at four levels, from scheduling the season competitions down to scheduling daily workouts. At Level 0 we are largely working with the coach deciding what the season schedule will look like; there is no real specialisation at this point. But from L1 to L3 the specialists will be planning for their domain of expertise: physiology, strength and conditioning, psychology, equipment and so on.

At Level 0 we are selecting events and prioritising them: A race, B race etc. Some races might be for development, some for simulation and assessment and some to win; we might also add training camps or similar here. These are season milestones. They drive all aspects of the plan. Remember: the entire process is about optimizing event results, we need to be performance focused.
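Level 0 output is really just a prioritised, dated list of milestones. A toy sketch, with invented events and dates:

```python
from datetime import date

# Illustrative sketch of Level 0 planning: selecting and prioritising
# events (A race, B race, development, simulation) as season milestones.
# Event names and dates are invented for the example.
milestones = [
    {"name": "National Champs", "date": date(2025, 6, 22), "priority": "A", "purpose": "win"},
    {"name": "Spring Classic",  "date": date(2025, 4, 13), "priority": "B", "purpose": "simulation"},
    {"name": "Local Crit",      "date": date(2025, 3, 2),  "priority": "C", "purpose": "development"},
    {"name": "Altitude Camp",   "date": date(2025, 5, 5),  "priority": "B", "purpose": "training camp"},
]

# Milestones drive the whole plan, so order them chronologically...
season = sorted(milestones, key=lambda m: m["date"])
# ...and pull out the A races that everything else builds towards.
a_races = [m for m in season if m["priority"] == "A"]
print([m["name"] for m in a_races])  # → ['National Champs']
```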

At Level 1 we are profiling events: considering the condition, performance levels, skills and so on that the athlete will need in order to win or meet the event goals. We are effectively setting SMART targets for development; for example: on the first day of the Tour I will be rested with a CP of 360 or better, and my weight should be 69kg (carrying 2kg).
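A target like the one in the worked example above is easy to express as data. A hypothetical structure (the field names and the check function are assumptions, mirroring the "CP of 360 or better, weight 69kg" example):

```python
from datetime import date

# Level 1 SMART target expressed as data; the structure is invented
# for illustration, not taken from any real planning tool.
target = {
    "event": "Tour",
    "due": date(2025, 7, 5),          # first day of the event (invented date)
    "condition": "rested",
    "metrics": {
        "cp_watts":  {"op": ">=", "value": 360},
        "weight_kg": {"op": "<=", "value": 69.0},
    },
}

def met(measured, spec):
    """Check a measured value against a single metric spec."""
    if spec["op"] == ">=":
        return measured >= spec["value"]
    return measured <= spec["value"]

print(met(365, target["metrics"]["cp_watts"]))    # → True
print(met(70.2, target["metrics"]["weight_kg"]))  # → False
```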

At Level 2 and Level 3 we are developing a plan to meet the L1 targets. This is where impulse response models, neural nets and coaching expertise come into play; developing macro "load" plans and optionally micro "workout" plans and weekly priorities or "themes". Workouts might only be planned for the first couple of training blocks, or, for some self-coached athletes, not planned at all, just selected on the fly depending upon how they feel on the day.

Planning at L2 and L3 is effectively an optimization problem: determining the correct training schedule (and recovery), taking into account event requirements and dates and athlete constraints (training days, holidays etc), and coming up with the best possible plan. We will see this again in the adapt process.
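To make the impulse-response idea concrete, here is a minimal sketch of the classic Banister formulation: each day's training "impulse" decays into a slow fitness response and a fast fatigue response, and predicted performance is fitness minus weighted fatigue. The time constants and weights below are illustrative defaults, not fitted values:

```python
import math

def banister(loads, k1=1.0, k2=2.0, tau1=42.0, tau2=7.0, p0=0.0):
    """Toy Banister impulse-response model.

    loads: daily training impulses; tau1/tau2: fitness/fatigue decay
    time constants in days; k1/k2: response weights (all illustrative).
    Returns the predicted performance after each day.
    """
    fitness = fatigue = 0.0
    performance = []
    for load in loads:
        fitness = fitness * math.exp(-1 / tau1) + load
        fatigue = fatigue * math.exp(-1 / tau2) + load
        performance.append(p0 + k1 * fitness - k2 * fatigue)
    return performance

# A hard two-week block followed by a one-week taper: fatigue decays
# faster than fitness, so performance rebounds during the taper.
loads = [100] * 14 + [20] * 7
perf = banister(loads)
print(perf[-1] > perf[13])  # end of taper beats end of the hard block
```

Framed this way, L2/L3 planning is a search over candidate `loads` sequences, subject to the athlete's constraints, that maximises predicted performance on the milestone dates.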

Once all the specialists (remember a coach or athlete might also be a specialist) have developed their plans they need to be combined into a season plan. This sounds trivial but is in fact a bit of a scheduling task, making sure the assessments and hard work align and don't conflict with each other. Some of the issues here can be avoided by planning collaboratively.
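The merge step above can be sketched as a simple conflict check: combine each specialist's plan per day and flag days where hard sessions collide. The plan contents and the notion of a "hard" session are invented for the example; real plans would be far richer:

```python
# Toy sketch of combining specialist plans into a season plan and
# flagging scheduling conflicts. All sessions here are illustrative.
plans = {
    "physiology": {"2025-04-01": "ramp test", "2025-04-03": "intervals"},
    "strength":   {"2025-04-01": "max lifts", "2025-04-05": "gym"},
    "psychology": {"2025-04-03": "review"},
}

HARD = {"ramp test", "intervals", "max lifts"}

# Merge per day, then flag days carrying more than one hard session.
merged = {}
for domain, plan in plans.items():
    for day, session in plan.items():
        merged.setdefault(day, []).append(session)

conflicts = {
    day: sessions
    for day, sessions in merged.items()
    if sum(s in HARD for s in sessions) > 1
}
print(conflicts)  # → {'2025-04-01': ['ramp test', 'max lifts']}
```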

Lastly, the plan needs to be published; put into diaries, systems, calendars or whatever mechanism works for the athlete and coaching team.


Figure 5: Execute

Throughout the season the athlete will perform planned workouts and record the data (power, HR etc). This data is saved as an activity, enriched with athlete feedback, and sent to the coaches and specialists.
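The flow above amounts to: recorded samples become an activity record, which is enriched with subjective feedback before being shared. A minimal sketch, with an invented schema:

```python
# Illustrative activity record for the execute step; the field names
# and sample values are assumptions, not a real file format.
activity = {
    "date": "2025-04-02",
    "planned_workout": "3x10min threshold",
    "samples": {"power": [250, 310, 305], "hr": [132, 158, 161]},  # truncated
}

# Enrich with the athlete's subjective feedback, then it is ready
# to send on to the coaches and specialists.
activity["feedback"] = {"rpe": 7, "notes": "legs heavy in the last set"}
print(sorted(activity))  # → ['date', 'feedback', 'planned_workout', 'samples']
```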

Some workouts may require substantial preparation; maybe it's an assessment day, or a visit to the wind tunnel. In other instances, just getting out the door is the only prep needed.


Figure 6: Assess activity in context

After the athlete has shared the activity (race, training, test etc) and their feedback, the coach and specialists will assess it and return their feedback.

The assessment is obviously domain specific, but the analysis really asks the same questions: is the athlete working? Is the plan working? Is the plan sustainable?

There are many methods and tools in this space (GoldenCheetah says hi!), but fundamentally this is about deciding if an intervention is required: a kick up the backside (or a hug) for the athlete, or do we need to replan?
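The three assessment questions can be caricatured as a simple triage: compare planned versus completed load, check a wellness score, and decide between feedback and escalation. The thresholds and inputs are invented assumptions, not a recommendation:

```python
# Toy triage of the three assessment questions. The 70% compliance
# threshold and the 0-10 wellness scale are illustrative assumptions.
def assess(planned_load, completed_load, wellness, min_wellness=5):
    compliance = completed_load / planned_load if planned_load else 0.0
    if compliance < 0.7:
        # athlete not completing the work, or the plan is unrealistic
        return "escalate: athlete or plan not working"
    if wellness < min_wellness:
        # work is being done but at an unsustainable cost
        return "escalate: plan may not be sustainable"
    return "feedback: on track"

print(assess(500, 480, 7))  # → feedback: on track
print(assess(500, 300, 7))  # → escalate: athlete or plan not working
print(assess(500, 490, 3))  # → escalate: plan may not be sustainable
```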

The output of this process is feedback from the coaches and the possibility of an escalation to consider changing something.


Figure 7: Reflection and communication

Communication, collaboration, motivation and development are key responsibilities of the coach. Spending time with athletes to listen, discuss and provide validation and feedback is absolutely vital. Ask any decent coach and they will tell you this is the most important part of their job.

All coaches will have different personalities and styles, but they will have a schedule for interaction (and will also interact in an ad hoc manner). There is a wide range of planned interactions; these will be some form of:
  • Reviews - weekly or monthly reviews with athletes are typical, although data collection might be performed by the athlete daily. These sessions focus on the athlete: how things are going, rest-of-life issues, motivation. If there are real issues and the plan isn't working, that may result in a planning escalation.
  • Retrospectives - typically after training blocks and definitely after a season or A race. The focus here is on outcomes and the development process: is it working, how can it be improved, do we need to adapt? Whilst of course the athlete is still at the centre, this is about making sure the athlete is continuing to grow and develop.
  • Escalations - when something isn't working or adverse events happen the escalation is a focused session to try and remove emotion and work out what the best way forward is. 
  • Reflection - athletes, coaches and specialists will reflect on their own performance and development, feeding into their own development plans. In the athlete's case this can be more formalised with daily questionnaires (POMS et al) as well as a more personal diary or notes.
  • Standups - In elite sport it is important to make fast decisions, during multi-day events or critical periods of athlete development the coaches and specialists will work together to make decisions about the day ahead. In amateur sport this might be the self-coached athlete deciding whether to train or which type of workout to perform. 
Other interactions of course take place; for example, before a big event there may be a briefing or strategy session. These should be covered under the execution process (see A32 workout prep).


Figure 8: Adapt (aka replan)

Adapt is basically the same process as plan, albeit focusing on adjustments. So as you can see above, planning parameters are adjusted (goals, strategy and constraints) before replanning.

For some domains this might be a highly formalised process, for others it might be highly informal. But the main steps still apply, and can be summarised as: adjust the outcomes to accept a new reality, or change the plan to make the outcomes a reality.

The optimize plan process A64 above is likely to be the subject of a lot more blog posts and activity. You have been warned.


This process model is still very high-level, and purposefully so. Whilst there are many different workflows, tools and techniques at the domain level (physiology, strength and conditioning, psychology et al) the overarching process is consistent.

However, unlike the sports performance management framework posted recently, this process is quite low-level. It has focused on managing an individual athlete and their development goals. It doesn't cover managing camps, developing a squad, team selection and squad rotation or other processes. Maybe that's for another blog (highly unlikely!).

If you have comments, leave them below; feedback is very welcome, especially if it helps to improve the process model. As ever, the material posted here is available for your own use and abuse on Google Slides.

PS: If you are a process modeller or IDEF0 expert, my apologies for those times where I didn't show mechanisms or controls and invented a new notation (e.g. "<>" !?). Maybe we can fix that later.
