Thursday, 14 July 2016

Test-Infected Developers

This article was originally published in the June edition of Testing Trapeze

At my workplace there is a culture of shared ownership in software delivery. We develop our products in cross-functional agile teams who work together to achieve a common business goal. However, it’s still relatively rare for specialists to be proactive about picking up work in areas outside of their own discipline. For example, you don’t often see business analysts seeking out test execution tasks and prioritising those above work to refine stories in the product backlog.

That said, I’ve recently noticed an increase in the number of developers who are voluntarily engaging in test-related activities. They’re not jumping forward to think about test planning or getting excited about exploring the application. But they are diving into our automation by helping the testers to improve the coverage it provides, or working to enhance the framework on which our tests run.

As a coach, part of my role is to foster cross-discipline collaboration. I confess that I haven’t been putting any active focus on the relationships between developers and testers; it is something that has changed as a byproduct of other activities that I’ve been part of. I’ve been reflecting on what’s behind this shift and the reasons why I believe the developers are getting more involved.

Better Test Code

In the past our test code has occupied a dark corner of our collective psyche. Everyone knows that it is there, but most people don’t want to engage with it directly. In particular, I have felt that developers were reluctant to get involved in a code base that was littered with poor coding practices and questionable implementation decisions. In instances where a developer did contribute, it was often a cause of frustration rather than satisfaction.

The test team have recently undertaken a lot of work to improve the quality of code that runs our automation. In one product we started the year with a major refactoring exercise that allowed us to run tests in parallel, which more than halved our execution time. In another we’ve launched a brand new suite with agreed coding standards from the beginning.

The experience for a developer who opens our automation is a lot less jarring than perhaps it has been in the past. As the skills of the testers improve, and the approach that we take to writing code becomes closely aligned with the way that developers are used to working, it’s no longer traumatic for a developer to delve into the test suites.

In addition, all of the changes to the test code now go through the same peer review process as the application code. We use pull requests to facilitate discussion on new code. There is a level of expectation: it’s not “just test code”. We want to write automation that is as maintainable as our application.

The developers have started to participate more in peer review of test code. There’s a two-way exchange of information in asking a developer to review the automation. The tester gains a lot of instruction on their coding practices. However, the developer also gains by having to completely understand the test coverage in order to offer their feedback on it.

Imperfect Test Framework

On the flip side of the previous point, there are still a number of very clear opportunities for enhancing our automation frameworks and extending the coverage that they offer. The testers don’t always have the capacity, skills or inclination to undertake this work.

I can think of a few occasions where a developer has been hooked into the test automation by an interesting problem in the supporting structure that needed a solution: specific technical jobs like setting up an automated script for post-release database changes, or tweaking configuration in the continuous integration builds. These tasks improve their understanding of the framework and may mean that the developer ends up contributing to the test code too.

Within the tests, there are application behaviours that are challenging to check automatically. Particularly in our JavaScript-heavy applications we often have to wait for different aspects of the screen to update during a sequence of user actions. Developers who contribute by writing the helper methods required for testing in these areas will often end up having a deeper understanding and closer involvement in all of the associated test code.
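
To make this concrete, here is a minimal sketch of the kind of helper method I mean, assuming a Selenium WebDriver suite in Java. The language, class and method names are my own illustration rather than our actual code, but the pattern of polling the browser until a JavaScript-heavy page has settled is representative:

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.WebDriverWait;

// Hypothetical helper for JavaScript-heavy pages: block until the browser
// reports that the document has finished loading and, if jQuery is present,
// that no AJAX requests are still in flight.
public class WaitHelper {

    public static void waitForPageToSettle(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, 10); // timeout in seconds

        // Wait for the DOM to be fully parsed and loaded.
        wait.until(d -> ((JavascriptExecutor) d)
                .executeScript("return document.readyState").equals("complete"));

        // If the page uses jQuery, also wait for outstanding AJAX calls to finish.
        wait.until(d -> (Boolean) ((JavascriptExecutor) d)
                .executeScript("return window.jQuery == null || jQuery.active === 0"));
    }
}
```

A test can call a helper like this between user actions instead of scattering sleeps through the suite, which is exactly the kind of plumbing a developer can build quickly and testers can then reuse.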

I believe the key here is providing specific tasks where the developers can engage in the test code with a clear purpose and feel a sense of accomplishment at their conclusion. In some instances, the developer will complete a single task then withdraw from the testing realm. In others, it’s a first step towards a deeper involvement in the test code and subsequently testing.

Embedded In Development

In almost every instance, a developer who is making a change to one of our applications will need to raise a pull request to have their code merged back to our master branch for release. As part of the process enforced by our tools, the code to be merged must have passed all of our automated checks. Every change. All of the automation.

We’ve always run our automation regularly, but it’s only relatively recently that it has become mandated on every merge. This change has largely been driven by the developers themselves, who are keen to improve the quality of code prior to testing the integrated code base.

Now that our automation runs many times per day it is in the best interests of the developers to be engaged in improving the framework. If it is unreliable or the tests are slow to execute, it has an immediate negative impact on the developers as they are unable to deliver changes to our applications. They want our automation to be robust and speedy.

The new build schedule has helped to flush out pain points in the test code and engaged a wider audience in fixing the root causes of these issues by necessity. Now most of the developers have experienced a failing build and had to personally debug one or more of the tests to fix the problem. The developers are actively monitoring test results and analysing failures, which means that they are a lot more familiar with the test code.

Conclusion

I see automation as a gateway to getting developers engaged in testing more broadly. When collaborating on coverage in automation, there is the opportunity to discuss the testing that will occur outside of the coded checks. The conversation about what to automate vs. what to explore is a valuable one for both disciplines to engage in.

We’ve taken three steps down the path to having our developers excited about picking up tasks in our test automation. We’ve made the suites a pleasant place to spend time by applying coding standards and ensuring that changes are peer reviewed. We’ve provided opportunities for developers to contribute to the framework or helper methods instead of asking them to write the tests themselves. And we’ve embedded the automation in the development process to create a vested interest in rapid and reliable test execution.

Developers can become test-infected. I am seeing evidence of this in the collaborative environment that is continuing to evolve in my organisation.

Monday, 11 July 2016

A community discussion

A while back I put out a tweet request:


I spoke about the responses to this tweet during my talk titled "A Community Discussion" at Copenhagen Context. Somewhat ironically, I've been reluctant to share the feedback that I received in writing. There have been exchanges in the testing community recently that make me feel now is the time.

I had a lot of responses to my original request on Twitter. About half tried to explain context-driven testing rather than the community. Those who did speak about the people and environment gave responses like:
  • A bunch of supportive, challenging and engaged people full of questions, support and understanding.
  • Warm and welcoming, literally the best thing that I've come across in my career.
  • People who insist on a human perspective on testing
  • A community of people who constantly asks the question how can we be (test) better?
  • A group of people not restricted by a so called set of best practices and a one size fits all approach
  • A world-wide support network of people who share the same fundamental principles as me

I also had a lot of responses via private channels: direct messages, email and Skype. In many instances they were from people who no longer felt that they were part of the community. They gave responses like:
  • The Cult/Church of CDT due to the rhetoric used by CDT to describe their heroic and righteous fight against evil
  • The Test Police because they feel the need to correct the terminology and thinking of everyone else regardless of whether they share the same world-view.
  • They are an academic think-tank that is out of step with modern business needs
  • CDT is RST, it’s all just RST stuff, RST is the new best practice
  • If you don’t beat your drum to the CDT Rhythm they’ll beat you down hard
  • The Anti-ISO group, The Anti-ISTQB people, the Anti-anyone not CDT people etc. 
  • Not a safe place to share and explore

Are you surprised by this?

I was surprised by the stark polarity in what was shared openly and what was shared privately. I was surprised by who responded and who chose not to. I was surprised by specific individuals who held different opinions to what I had expected. However, I wasn't surprised to see these two views emerge.

What bothers me is that these two viewpoints seem to be a taboo topic to have a conversation about. 

On Twitter there has been activity that feels like warfare. Grenades are launched from both sides, loud voices shout at one another, misunderstandings create friendly fire, and when the smoke clears no one is sure what the outcome was. 

What I wanted to do in my talk at Copenhagen Context was start a dialog. I talked about an inclusive context-driven testing community by sharing the model I created almost two years ago. I suggested some ways in which we could alter our behaviour. I was part of an Open Season discussion where those present shared their views. 

Since then?

I continue to focus on making the New Zealand testing community as inclusive as possible. I believe that WeTest, Testing Trapeze and even this blog are making a difference in spreading the ideas from the context-driven school without the labels. I strive to be approachable, humble and open to questions.

I hope that I am setting an example as someone making a positive difference through action. My personal role model in this space is Rosie Sherry, who is the "Boss Boss" at Ministry of Testing. I observe that she has her own style of quiet leadership and a practical approach to change.

But the wider conversation is still adversarial or hidden. I'd like to see that change.

What are your thoughts?

T-Shirt print from Made in Production

Sunday, 3 July 2016

Why we're switching to Selenium Grid

The department that I am part of has gone through a big growth spurt recently. When I started in my role, just over a year ago, there were 20 testers. Now there are 30. That jump is indicative of what has happened in all disciplines of software delivery.

This growth is starting to create some interesting problems in the execution of our test automation. In particular for our web-based retail banking application, which is a relatively young product that has had test automation embedded in the development approach since the very beginning.

Alongside a comprehensive unit test suite, we've been using Selenium WebDriver to execute tests against Firefox. We call these tests our "automated acceptance suite" (AAS) or "node tests", which is a reference to the mock server technology that these tests execute against.

In the beginning the application was small and the node tests that ran alongside it were quick. As the product has grown we've added more tests, so they take longer to execute. When the fast feedback provided by our automation was no longer fast enough, we switched our tests from single-threaded to parallel execution.
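
For anyone curious about the mechanics, a common pattern for making WebDriver tests safe to run in parallel is to give each test thread its own browser instance. This is a sketch of that idea in Java rather than our exact implementation; the class name is hypothetical, and the test-runner configuration that supplies the threads (such as TestNG's parallel setting or Maven Surefire forking) is assumed rather than shown.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// One browser per test thread: tests running at the same time never share
// a WebDriver instance, so they cannot trample each other's state.
public class DriverFactory {

    private static final ThreadLocal<WebDriver> DRIVER =
            ThreadLocal.withInitial(FirefoxDriver::new);

    public static WebDriver getDriver() {
        return DRIVER.get();
    }

    // Called from test teardown so that each thread closes its own browser.
    public static void quitDriver() {
        DRIVER.get().quit();
        DRIVER.remove();
    }
}
```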

In the beginning there was just a single development team and the node tests ran every time that a change was made. As the number of teams has grown the number of changes being made has increased, so the tests are being executed more frequently. When our build queues started to exceed reasonable lengths, we switched from dedicated continuous integration hardware to Docker containers that increased the number of builds we could execute in parallel.

Our solution to problems introduced by growth has been to do more things at once.

To get the tests to run faster we switched the test implementation to parallel execution.

To get the build queues to be shorter we switched the infrastructure to parallel execution.

These were good solutions for us. But now we're coming to the point where we can't do any more things at once with what we have. To illustrate, compare what was running on our build server against what is running there now:


In the beginning we had dedicated hardware. It ran a node server to return mock responses, a web server for our product, and the tests that opened a single Firefox window to execute against.

In our current state we have four active Docker containers. Each runs a node server, a web server, and the tests that open four Firefox windows to execute against.

In our current state we're hitting the limits of what our infrastructure can do. This is manifesting in two types of problem that are causing a lot of frustration as they fundamentally impact on two key measures for the usefulness of automation: speed and stability.

Our current state can be slow, particularly when there are four builds executing at once and the hardware is fully loaded. Our overnight build time is approximately 30 minutes. By contrast, when a build executes during business hours it takes approximately 50 minutes.

I find it easiest to explain why this happens using an analogy. Imagine a horse towing a cart with four large pumpkins in it. The horse can trot down the street quite happily, relatively unencumbered by its load. Now imagine the same horse towing a cart with 28 large pumpkins in it. The horse can still move the cart, but it won't be able to travel at the same pace that it did with a lighter load. It may trudge rather than trot.

Our overnight build is carried by the lightly loaded horse as it may be the only build active on our hardware. Our build during business hours is carried by the heavily-laden horse as many builds run at once. The time taken to complete a build alters accordingly.

The instability we've seen comes partly from this variable speed. There's a particular case where we look for a success notification that is only displayed for a fixed duration. When the timing to complete the action that triggers this notification is variable, it becomes frustrating to verify.
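
One way to make a check like this more tolerant of variable timing is to poll frequently enough that a short-lived notification is unlikely to appear and vanish between two checks. Here is a sketch of that idea using Selenium's FluentWait in Java; the locator, class and method names are placeholders rather than our real ones.

```java
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.FluentWait;

public class NotificationChecker {

    // Poll every 100 milliseconds so that a notification which is only on
    // screen for a few seconds cannot easily slip between two checks.
    public static void waitForSuccessNotification(WebDriver driver) {
        new FluentWait<>(driver)
                .withTimeout(10, TimeUnit.SECONDS)
                .pollingEvery(100, TimeUnit.MILLISECONDS)
                .ignoring(NoSuchElementException.class)
                .until(ExpectedConditions.visibilityOfElementLocated(
                        By.cssSelector(".success-notification"))); // placeholder locator
    }
}
```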

But we've also had stability problems with the four Firefox browsers running on a single display. Some failures are caused by tests running in parallel that fight for focus e.g. attempting to confirm a payment via a modal dialog. Others are attributed to two different tests that simultaneously attempt to hover and click the mouse e.g. editing an account image. When these clashes occur, one of the tests involved will usually fail.

Our operations team ran some diagnostics on the existing hardware to determine what made it slow. They identified which processes were chewing up the most system resources (the largest pumpkins on the cart). It turned out that there was a clear single culprit: Firefox.

Enter Selenium Grid.

Selenium Grid enables a distributed test execution environment. What this means in our case is that we can move all of the Firefox instances out of our Docker containers. This will significantly lighten the load on our existing continuous integration infrastructure:



In the proposed future state, our tests will send their commands to the Selenium Grid Hub on our cloud-based infrastructure. The hub will have connectivity to a pool of Selenium Grid Nodes. Instead of having multiple Firefox windows open on a single display, we're provisioning each node in a dedicated container with a single browser.

Each grid node will know where it was triggered from, as the browser will still open the web application that is running on the existing Docker architecture. This does mean that we are introducing network latency into each of our WebDriver interactions, so they'll be slower than on local hardware. But the distributed architecture should give us enough advantages that we still end up with a faster solution overall.
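
In WebDriver terms the switch is a small one: instead of launching a local Firefox, each test requests a session from the hub, which forwards it to any free node advertising that browser. A minimal sketch in Java against the Selenium API of the time; the hub URL and class name are placeholders:

```java
import java.net.MalformedURLException;
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridConnection {

    // Placeholder address for the hub on our cloud-based infrastructure.
    private static final String HUB_URL = "http://grid-hub.example.internal:4444/wd/hub";

    // The hub receives this session request and forwards it to a free
    // grid node that has a Firefox browser available.
    public static WebDriver createRemoteFirefox() throws MalformedURLException {
        return new RemoteWebDriver(new URL(HUB_URL), DesiredCapabilities.firefox());
    }
}
```

From the test code's point of view the returned driver behaves like any other WebDriver, which is why the rest of the suite needs no changes.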

Our hope is that this proposed future will address our existing speed and stability issues. Increasing the system resources available, through the introduction of additional hardware, should help us to get consistent build times regardless of the time of day. And having each Firefox browser in its own dedicated container should avoid any display contention.

We have a working prototype for the proposed future state and early signs are promising. I'm looking forward to turning the vision into reality and hope that it will bring the benefits that we are searching for.

Thursday, 23 June 2016

Launch Wrangling

Imagine being one of five testers in an organisation with over 400 developers. Picture a pace of 1,251 production deploys in a week. Now throw in a distributed workforce that communicates almost exclusively online.

Last night I attended Sheri Bigelow's talk at WeTest. Sheri works as a tester at Automattic in what I believe to be quite a unique environment. She shared some fascinating insights into building a testing culture in continuous delivery, in a team where testers are vastly outnumbered and testing is an optional activity.

Of all the things that Sheri spoke about there was one in particular that resonated. It's something that I can imagine applying in my own organisation, despite our vastly different contexts in developer:tester ratio, rate of release, and risk profile.

Launch Wrangling

Many people who develop software consider a release to be the end of their process: the idea is that once a feature is in production, in the hands of customers, the development team can move on to their next piece of work.

When Sheri talked about deploying to production she said that it's "not the end of the game, it's kind of the middle". At Automattic the developers have work to do beyond creating the code itself. They are expected to monitor their changes in production and help to provide support to users when required. A true DevOps culture.

In the middle of a sports game, the players will usually take a half-time break. They'll have some refreshments, reflect on the game so far, take inspiration from their coach, then return to the competition.

In teams where deploying to production is the halfway point of the process, there'll be a similar lull in activity. That little bit of time between something being finished and being released is an opportunity for refreshment, reflection, then a return to action. A half-time break.

In this relatively empty time, Sheri saw an opportunity. She started asking delivery teams to use the space around their releases to participate in, essentially, a bug bash. People put aside their day-to-day duties and those from every type of role worked together for a short period of time to test the product.

At Automattic this activity is called launch wrangling. Apparently when your company founder is from Texas there's a strong cowboy influence in naming things!

Sheri has used launch wrangling as an opportunity to introduce testing as an activity to developers who may not have tested before. She also talked about getting a lot of eyes across the application to improve the chances for important problems to be discovered prior to release. This means that launch wrangling is both a coaching tool and a way to improve test coverage.

No matter what type of delivery schedule you adopt, breathing space around deployments is likely to exist in some capacity. In my experience the amount of time available will correlate to the size of the changes being made to production. Big changes create a bigger pause. Utilising this gap seems like a sensible way to appropriately time-box a bug bash activity.

I like the idea of launch wrangling to foster testing across disciplines and improve the scope of testing where resources are particularly limited.

Monday, 6 June 2016

Benefits of cross-team pair testing in agile

One year ago, in June 2015, I launched a pairing experiment in my organisation. The primary purpose of this experiment was to share knowledge between testers who were working in different agile teams, testing different applications and platforms.

I shared the results of our experiment in my talk at TestBash in Brighton earlier in the year. For those who missed this presentation, this is a short written summary of the four main benefits that we observed from cross-team pair testing.

Visibility of other teams

Before we began the experiment I had received feedback from the testers that they felt siloed from their testing peers. At that stage we had 20 testers spread across 15 different agile teams, which meant that many were working as the only specialist tester in a cross-functional delivery team. 

This isolation was starting to seed imposter syndrome. The testers were beginning to doubt their own abilities and feel uncertain about whether they were doing things right.

Happily, one of the strongest themes in the feedback about cross-team pairing was that it increased visibility of what was happening in other teams. The opportunity to understand how another team operated was described as interesting, eye-opening and awesome. Seeing other practices and processes gave a degree of comfort to each tester that their own approach was appropriate.

Broader Scope

One of the challenges in being the only test specialist in a team is in generating testing ideas. It can be difficult to consider different perspectives when brainstorming as an individual.

Through pairing, the testers were able to see their application through fresh eyes by exploring with a tester from outside of their product. A different mindset helped them to identify gaps in the application and think of creative ideas to explore functionality. The opportunity to have deep discussions about testing led to the discovery of interesting problems on unexpected pathways.

The broader thinking demonstrated within a pairing session was then carried into future testing as each individual tester started to augment their own planning with ideas they had seen demonstrated by their peers.

Improve communication

Make fewer assumptions. Ask more questions. These are two central tenets of testing that most testers believe they follow. When compared to other disciplines, it's often true that testers ask more questions. Pairing highlighted situations where testers had started to relax these instincts.

The tester who was hosting a session would often make incorrect assumptions about the depth of their visitor's knowledge. Their "simple" explanations were difficult for someone from outside of their delivery team to understand.

The presence of an outsider exposed the amount of assumed institutional knowledge in the business stories, test planning and informal communication of a team. The tester who was visiting a peer would have to ask a lot of questions in order to understand the application and how it would be tested.

Pairing caused the testers to question their own expectations of knowledge in the team. They started to make fewer assumptions about what had been left unstated in team documentation. By increasing the number of questions they asked, the testers began to interrogate whether there was truly shared understanding or instead shallow agreement.

New Approach

Every person will have a unique way of working. Not just in their thinking, but in the particular combination of tools that they use to capture, process and report information. 

Pairing gave the testers the opportunity to observe and experience the work environment of a colleague. In many cases this first-hand experience led to the discovery of a new tool that could be adopted by the visiting tester in their own work. Through pairing we saw Chrome extensions, Excel macros and screenshot tools propagate across the department.

The proliferation of these tools meant that the testers were more productive. They were able to reduce the repetitive tasks in their workflow and use appropriate tools to support their test approach.


For more information about pairing, please refer to my previous posts:

Sunday, 22 May 2016

Changing culture through testing transformation

Imagine that you work in a large testing department with an established hierarchy. There are junior, intermediate and senior test analysts. There are junior, intermediate and senior test engineers. There are test leads, test managers and test directors.

Each person has a very clear notion of their place in the team and what their role is. Work flows down the hierarchy. To be seen as successful, in any role, the tasks received by each person must be completed quickly, quietly, correctly and without complaint.

Pretend that this team has been in place within the organisation for a decade. Some of the senior team members have been a part of this group since its inception. Within the social structures there is respect for experience and deep trust among the team.

Can you picture it?

Your role in this situation is to change the way that the team approach testing. You want to transform their process, improve their techniques, encourage critical thinking, foster debate, challenge the team to embrace innovation and new ideas.

What would you do first?

I believe this is a relatively familiar scenario in testing. The catalyst for testing transformation may be a switch in the entire development methodology from waterfall to agile. Or it could be the transformation that happens when shifting from scripted to exploratory testing, or when picking up specification by example, or when learning to use tools to complement our testing, or any number of other things.

Off-shore Transformation

Michele Cross spoke about "Transformation of a QA Department" at the recent Australian Testing Days conference in Melbourne. Her experience report was from a situation similar to that described above, but where the testing team were located off-shore in the Philippines.

Because of the added complexity of an off-shore relationship, Michele's transformation journey began with some research into the cultural context of the Philippines as compared to Australia. Her intent was not to be critical about cultural differences, but instead to allow them to inform her decision making.

Michele looked at the six cultural dimensions defined by Dutch social psychologist and anthropologist Geert Hofstede. Under this model there were two particular measures that showed a large difference between the Philippines and Australia:

Hofstede Centre Data [ref]
Jamie Beresford [Ref: ReadyOffshore blog] interprets these differences as:

Power Distance is defined as the extent to which the less powerful members of institutions and organisations within a country expect and accept that power is distributed unequally. It has to do with the fact that a society’s inequality is endorsed by the followers as much as by the leaders.

Australia scores low on this dimension (36). Within Australian organizations, hierarchy is established for convenience, superiors are always accessible and managers rely on individual employees and teams for their expertise.  Both managers and employees expect to be consulted and information is shared frequently.  At the same time, communication is informal, direct and participative.

The Philippines – At a score of 94, is a hierarchical society. This means that people readily accept a hierarchical order in which everybody has a place and which needs no further justification. Hierarchy in an organisation is seen as reflecting inherent inequalities, centralization is popular, subordinates expect to be told what to do and the ideal boss is a benevolent autocrat.

Individualism is the degree of interdependence a society maintains among its members. It has to do with whether people's self-image is defined in terms of “I” or “We”. In Individualist societies, people are supposed to look after themselves and their direct family only. In Collectivist societies, people belong to ‘in groups’ that take care of them in exchange for loyalty.

Australia – with a score of 90 on this dimension, is a highly Individualist culture. This translates into a loosely-knit society where there is an expectation that people look after themselves and their immediate families.  In the business world, employees are expected to be self-reliant and display initiative.  Also, within the exchange-based world of work, hiring and promotion decisions are based on merit or evidence of what one has done or can do.

The Philippines – with a score of 32, is considered a collectivistic society. This is manifest in a close long-term commitment to the member ‘group’ – be that a family, extended family, or extended relationships. Loyalty in a collectivist culture is paramount and over-rides most other societal rules and regulations (including your company). The society fosters strong relationships where everyone takes responsibility for fellow members of their group. In collectivist societies offence leads to shame and loss of face, employer/employee relationships are perceived in moral terms (like a family link), hiring and promotion decisions take account of the employee’s in-group, management is the management of groups.


Michele then spoke briefly about how trust is formed in different cultures. The Erin Meyer Harvard Business Review article on cross-cultural communication [Ref: Getting to Si, Ja, Oui, Hai, and Da] shows that there is a correlation between high individualism and cognitive trust vs. high collectivism and affective trust.

Slide from Michele Cross at Australian Testing Days

From the HBR article:

Cognitive trust is based on the confidence you feel in someone’s accomplishments, skills, and reliability. This trust comes from the head. In a negotiation it builds through the business interaction: You know your stuff. You are reliable, pleasant, and consistent. You demonstrate that your product or service is of high quality. I trust you. 

Affective trust arises from feelings of emotional closeness, empathy, or friendship. It comes from the heart. We laugh together, relax together, and see each other on a personal level, so I feel affection or empathy for you. I trust you.

If the way we build trust as an individual differs from the way a team expects to build trust, then there is a barrier to us working together.

On-shore Transformation

It's relatively unlikely that you're involved in an organisation with testers in Australia and the Philippines, so you may be wondering how this is relevant. As I listened to Michele speak about her challenges with an off-shore model, I started to think about the parallels to different cultures within on-shore teams.

Think back to the original picture you formed of the large, hierarchical testing team that you had been asked to transform. Imagine that they, and you, are based in Australia. I speculate that the Hofstede measures of this traditional testing team would look similar to those of the Philippines.

I believe that the people who work happily in traditional testing teams have high power distance: they are happy to accept instructions and operate within the sphere of power that the hierarchy grants to them. I believe they have high collectivism: they feel extremely loyal to one another and promotions are made with consideration to relationships and experience. I believe they build affective trust: they are not unconditionally trusting of someone with qualifications but are happy to be led by those who build genuine personal relationships.

Hearing Michele speak about the cultural challenges she has experienced off-shore gave me clarity on some of the challenges I have experienced and observed in testing transformation on-shore, where cultural barriers are less explicit. This new awareness is not supposed to be critical, but it can help to inform my decision making.

Changing Culture

Both Michele and I would find it abhorrent to imagine asking a person from the Philippines to "stop being Filipino" or to "be more Australian". Thinking about the cultural measures that Michele presented made me wonder whether this sentiment might be felt beneath the veneer of transformation, despite our intentions.

Remember this?

"Your role in this situation is to change the way that the team approach testing. You want to transform their process, improve their techniques, encourage critical thinking, foster debate, challenge the team to embrace innovation and new ideas."

Consider that objective not just as a way of shaping a testing approach. Think of it as a provocation to the culture of the team. Debate and challenge may be at odds with collectivism. Adoption of innovative ideas is often driven by cognitive rather than affective trust. It may be difficult for a tester to improve their techniques without direction or feedback provided through the existing hierarchical communication channels.

How does considering transformation through this lens change your approach?

Wednesday, 11 May 2016

How do you work out what's next?

Recently I've been prompted to think a bit about what's next in my career. To be clear, I'm very happy in my current role and plan to stay there for a while longer. However I'm also in a position where I don't know what the next step might look like.

I've taken up an opportunity to work with a mentor through Cultivate Mentoring Lab. We've met twice and she has already provided me with some practical advice that is helping me to find clarity. Here are three of her thoughts that might be useful to others too.

Connect

I have finally been persuaded to join LinkedIn. My mentor suggested that I use LinkedIn as a means of locating and connecting to people who are already working in roles I may want to do next.

When I say connecting, I don't just mean the connection feature of LinkedIn, but rather using the platform to set up real-life interactions. She suggested that I connect with a few people in senior leadership roles by sending a request for a short meeting to ask them three questions:

  1. What was your path to this role?
  2. What was good and bad about your path?
  3. What would you recommend to me?

Part of the reason that I don't know what my next step might look like is that I don't have existing connections to people in the roles that I aspire to. I plan to try and use LinkedIn as a tool for purposeful connection by asking a few others to share their own journey so that I may learn from them.

Research

I always have a search running on the common job sites in New Zealand so that I stay aware of what's happening in the local IT market. It can be frustrating to read the advertisements for roles that might reflect a sensible next step for me, as they tend not to provide much detail about the role itself. They are primarily marketing to attract the widest possible pool of candidates.

My mentor suggested that I could ask for more information. Where an advertisement includes an organisational HR contact rather than a recruiter, I could ask this person just to share the job description and position requirements with me. It had never occurred to me that I could do this!

If this information is only available to applicants, she suggested that I start applying for the roles purely to gain visibility of what they might entail. The notion of applying for something that I know I am not qualified for is again quite foreign. I know that I'll want to be selective in this approach, as I don't want to create overhead for the people managing these types of vacancies, but it seems a great way to get a deeper understanding of the opportunities that are available.

Refine

Specialist recruiters are often used in IT hiring because they are able to filter technical CVs to meet an organisation's specific criteria. My mentor suggested that I contact the recruiters who undertake this work for the roles that I aspire to.

I plan to request a short meeting to get their opinion on what is missing from my CV. Given the volume of CVs that a recruiter will process, it should be relatively quick for them to identify what skills are missing and how the presentation of information is lacking with respect to other candidates.

This feedback will help me identify where I need to learn more in order to meet the requirements of future roles and give me ideas on how to best present my talents for these types of vacancies.

*****

I've found it immensely useful to have practical ideas that will help me determine what my options are. As I work to connect, research and refine, I hope to work out what might be next for me. Perhaps these ideas will help you work out what's next too.