Testing is essential. But when it comes down to it, there’s still one big question: Are you testing it manually or are you automating the process?
And if you’re wondering, yes, there’s a difference. Both have their place, but in this chapter, we’re going to focus on manual testing — where real humans (that’s you) are the ones driving the tests. We’ll cover what it is, who does it and how to organize it all without losing your sanity.
Manual testing is exactly what it sounds like — testing done by humans. It’s the opposite of automated testing, where specialized software handles the heavy lifting.
However, the whole idea of dividing testing into “manual” and “automated” is a bit odd if you think about it. The only part that’s truly automatable is the test execution itself. Everything else, like creating test cases, analyzing results and figuring out what even needs testing, is still pretty manual in both processes.
But as AI keeps advancing, that boundary is getting thinner every day. Soon enough, humans may have even less work to do. But hey, let’s not get ahead of ourselves — Chapter 8 dives into the future of AI-driven testing. For now, let’s stick to the manual side of things.
The people who perform manual tests are often called "manual testers." And, naturally, manual testers come in two main breeds:

- Professional testers, who test software for a living.
- Occasional testers, who test alongside their regular jobs.
If we’re being cheeky, we could add a third category: accidental testers — those users who stumble across bugs in production that weren’t caught earlier. But if you’re reading this far, we trust you wouldn’t let that happen. Right?
Professional testers are well-versed in testing methodologies and know how to root out software bugs. For them, the line between "manual" and "automated" doesn’t really exist — they’re skilled at both.
Occasional testers, on the other hand, are usually employees recruited to test software they’ll be using on the job. They’re the SMEs (subject matter experts) in their respective business processes, and that’s why you’ll often hear them referred to as “business testers.”
When planning a manual testing effort, there are a few key questions you need to answer. Don’t worry, they’re straightforward — but it’s the details that will make or break your success:

1. What needs to be tested?
2. When and how often should you test?
3. Who will do the testing?
4. Which approach fits your testers: scripted, task-based or exploratory?
5. How will you brief your testers?
6. How will testers get access to the application?
7. How will problems be reported and followed up?
Sounds like a lot, right? Don’t panic. These questions are simpler than they seem. The real challenge comes when you skip over them entirely, and that’s where most organizations trip up. But don’t worry, we’ve got you covered. Let’s break each one down.
Before diving into any other types of testing, your first priority should be making sure developers perform unit testing on their code. This helps catch those trivial programming errors early. If anyone objects, just tell them it's called Shifting Left — trust us, they’ll be so impressed with your jargon that they won’t dare push back.
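To make "unit testing" concrete, here is a minimal sketch of what one looks like, using Python's built-in `unittest` module. The `discount_price` function is purely hypothetical, invented for this illustration:

```python
import unittest

def discount_price(price, percent):
    """Apply a percentage discount, rejecting invalid input early.
    (Hypothetical function, invented for this example.)"""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountPriceTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount_price(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        # Catching a trivial input bug here, instead of in UAT, is shifting left.
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)
```

Run with `python -m unittest` and the suite fails the moment the function's behavior drifts, long before a manual tester ever sees the problem.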
When you add something new or change something existing in your application, anything can break. But you don’t have the luxury of testing everything all the time, right? Even if you did, it probably wouldn’t make sense. You need a strategy. Here’s a recipe that works for manual testing most of the time:

1. Test everything that’s new.
2. Test everything that has changed.
3. Retest everything that was recently fixed.
4. Test existing features that are at risk of breaking.
5. Test anything that would cause serious damage if it broke.
Points 1, 2 and 3 are pretty straightforward. But points 4 and 5? That’s where things get tricky. There’s no magic method for knowing what’s at risk of breaking. If you’ve worked with the application long enough, you’ll develop an instinct for what tends to go haywire when things change. But what if you’re new or just inherited a new application? So far, Marvel hasn’t released a superhero with test intuition superpowers, so you’re on your own there.
It’s important to note that “test everything that’s new” is a bit more vague than it sounds. For example, let’s say a new report feature is introduced. You’ll want to open it, check the results and make sure it works as expected. Great, right? Well, not exactly. Have you tried to break it? Did you test it in scenarios where things could go wrong — like whether it’s accessible to users who shouldn’t have access or editable by people who should only be viewing?
This is where risk-driven testing comes in. The goal is to prioritize what you test based on the likelihood of something breaking and the severity of the consequences if it does. New features are more likely to break than ones that have been around for a while. But that doesn’t mean you can ignore existing features, especially those related to new changes.
In summary, always prioritize testing that focuses on high-risk areas. New or changed features are the most obvious, but don’t forget to test components that are indirectly affected and could cause significant damage if they fail.
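The "likelihood times severity" idea can be made concrete with a tiny prioritization sketch. This is illustrative only: the area names and the 1-to-5 scores are invented, and in practice the scores come from team judgment, not a formula:

```python
def prioritize(areas):
    """Order test areas by risk score: likelihood of breaking times impact of failure."""
    return sorted(areas, key=lambda a: a["likelihood"] * a["impact"], reverse=True)

# Hypothetical backlog, scored 1 (low) to 5 (high) by the team.
backlog = [
    {"area": "new report feature",        "likelihood": 5, "impact": 3},
    {"area": "billing (unchanged)",       "likelihood": 2, "impact": 5},
    {"area": "login page (unchanged)",    "likelihood": 1, "impact": 5},
    {"area": "lead assignment (changed)", "likelihood": 4, "impact": 4},
]

for item in prioritize(backlog):
    print(item["area"], item["likelihood"] * item["impact"])
```

Notice that the recently changed areas float to the top even when an untouched area like billing carries a higher impact, which matches the intuition above: new and changed features are the most likely to break.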
We know you are so looking forward to testing, but first, take a deep breath and make a plan. Once you know what needs testing, you’ll be able to estimate the effort and decide what skills your testers will need.
If you're practicing DevOps or agile software development, your test plan should already be baked into your process. Right? The whole idea of agile is frequent, incremental deployments with minimal disruption, which means testing should be ongoing and almost continuous — ideally, much of it automated.
But if you’re not quite there yet, that’s okay. Most teams aren’t. Just know that if you're not practicing agile fully, you’re missing out on some of the benefits of DevOps, like the ability to catch defects early and accelerate software development.
There’s Waterfall, Agile and then there’s what most companies actually do — which is somewhere in between. Let’s call it “more agile” or “less agile.” Assess where you are honestly, and aim for continuous improvement.
In agile, the first round of testing is often manual testing to give developers quick feedback. This testing is ideally fast and continuous, whether done by a dedicated tester or someone balancing testing as a part-time role. And yes, manual testing can sound costly, but frequent, incremental testing is usually faster and cheaper than massive, infrequent testing campaigns.
Once new functionality is manually tested, the next step is to automate those tests, so you don’t need to keep testing the same things manually. These automated tests can run whenever needed, reducing the burden on your testers — though someone will still need to click that start button until you fully integrate automated testing into your pipeline.
Now, you might ask, "Why manual testing first if automated testing is supposed to be faster?" The truth is, while automated test execution is faster, setting up the automation takes time — time that can be saved by running a quick manual test first.
If you're deploying infrequently and in larger chunks, you’ll need full testing campaigns with more people. This may sound expensive compared to agile's frequent testing, but it isn’t necessarily more expensive overall. The difference is that continuous testing spreads the workload over time, while a "big chunk" approach is less frequent but higher-effort. The latter can be slower and riskier, but it’s sometimes necessary.
Even in agile environments, there’s room for bigger testing campaigns. Many teams deploy frequently to pre-production environments but roll out to production less often. In this case, larger User Acceptance Testing (UAT) campaigns can make sense, especially if they’re aimed at ensuring real users are comfortable with the system or giving formal approval for a release.
Your testing plan will depend on your overall development process. Agile? You’re likely testing continuously in small increments. Less agile? You’re probably organizing bigger testing campaigns. Either way, the goal remains the same: find and fix defects fast to improve productivity and reduce risks.
The people you have available for testing play a huge role in how you organize your manual testing effort. If you’re working with business users, you can’t expect them to design tests from scratch. That’s where you or a testing pro comes in to create the tests for them. There are two primary approaches for business users:

- Scripted testing: detailed, step-by-step instructions.
- Task-based testing: a goal to accomplish, without prescribed steps.
In scripted testing, you provide testers with detailed “type this, click that” instructions. It’s essentially robotic work for humans — but there’s a better way. Enter task-based testing, where you give your testers a broader task like "create a new lead" or "run a report on recently opened opportunities."
The beauty of task-based testing is that it’s more realistic. You won't know every step your testers take, but you’ll know if they complete the task successfully or encounter any issues. The tasks should be scoped well, but unlike scripted tests, the tester isn’t hand-held through every click. They’re given a goal, and it's their job to achieve it and report anything unexpected along the way.
The third approach, exploratory testing, is best suited for professional testers and subject matter experts (SMEs). Now, a business user might occasionally be motivated enough to dive into exploratory testing, but let's be honest, that’s rare.
Exploratory testing takes things up a notch. Instead of rigid instructions, the tester is given a general brief about the business process and then asked to—well—explore. It sounds more fun, right? Imagine telling Julie, "You're not working, you’re exploring!"
We'll dive deeper into these approaches in later sections, but here's the truth: In real life, you often don’t get the ideal testers you’d hope for. Instead, you get the ones who happen to be available.
If you have business users testing, let them focus on testing business processes — that’s their wheelhouse, and they’ll do it well. If you have professional testers, they can push the application’s limits and test those "unlikely scenarios" that, trust us, will happen in real use. People always manage to find creative ways to break things.
Having at least one experienced professional tester on your team is invaluable. Not only will they improve your testing quality, but they’ll also free up your business testers to focus on what they do best.
The skill level of your testers will dictate the kind of guidance they need. You’ve essentially got three options for briefing them:

- A detailed, step-by-step script.
- A task to accomplish, with no prescribed steps.
- A goal, plus the context needed to explore toward it.
This applies to three very different kinds of testers:

- Testers who know neither the business process nor testing in depth.
- Business testers who know the process, but not testing techniques.
- Professional testers who know testing, but not necessarily the business process.
There’s also a fourth category: people who are experts in both the business process and in testing — but like unicorns, they are rare. If you have one on your team, congratulations!
The testers who "know nothing" need a detailed step-by-step description of each task. For example, a scripted test for creating a lead might read:

1. Log in to the test org with the credentials you were given.
2. Open the Leads tab and click New.
3. Fill in the required fields with the provided test data and click Save.
4. Check that the new lead appears in the lead list with the details you entered.
For this kind of testing, you don’t need an expert in the business process — anyone who knows the basics of Salesforce can follow a scripted test. But, ideally, you shouldn’t be doing this kind of testing at all. If you’ve got time to prepare such detailed instructions, you’ve got time to automate those tests.
With modern test automation tools (especially AI-based ones), creating automated tests isn’t much different from writing out a detailed, human-readable script. Excited to read Chapter 8 on automated testing yet?
This script example also highlights the essential elements of good test design. A well-defined test will include these key parts:

- A title that states what is being tested and why.
- Preconditions: the user, data and permissions needed before starting.
- The steps to perform, including any test data.
- The expected result, so the tester knows what “pass” looks like.
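One way to keep a test's parts together is to record each test as structured data rather than loose prose. A minimal sketch in Python; the field names are one reasonable choice, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A minimal structure for a well-defined manual test (illustrative fields)."""
    title: str                 # what is being tested, and why
    preconditions: list = field(default_factory=list)  # state needed before starting
    steps: list = field(default_factory=list)          # actions the tester performs
    expected_result: str = ""  # what "pass" looks like

lead_test = TestCase(
    title="Create a new lead from the Leads tab",
    preconditions=["Logged in as a user with lead-creation permission"],
    steps=["Open the Leads tab", "Click New",
           "Fill in the required fields", "Click Save"],
    expected_result="The new lead appears in the lead list with the entered details",
)
print(lead_test.title)
```

A structure like this also pays off later: a test recorded as data is one step closer to being fed into an automation tool.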
For business testers, a task-based approach works best. By assigning them a task—like “Create a new lead” or “Find and run a report of recently opened opportunities”—you’re seeing if they can accomplish the task without a detailed script. This approach mimics real-life scenarios.
Here’s an example: “Find and run the report of opportunities opened in the last 30 days, and check that the results look right.” No clicks are prescribed; the tester gets a goal and a way to judge success.
For professional testers or more advanced business testers, a goal-based test is ideal. For example: “Verify that the new report can’t be opened or edited by users who shouldn’t have access to it.”
This kind of test is more abstract but gives the tester the freedom to explore potential risk-driven areas in the application. Goal-based tests are perfect for testers who know how the application should work and what problems to look for.
Yes, you read that right — areas where things could go wrong. Professional testers aren’t just checking to see if the application works; they’re trying to prove that it doesn’t. That’s the attitude that makes them effective.
When briefing your testers, ask yourself: What don’t they know? Professional testers generally need a thorough brief about the purpose of the application, any new or changed features and areas with the highest risk. Once they understand the context, they know how to test.
On the other hand, teaching testing strategies to business users can be a waste of time — no offense to them. It’s not that they couldn’t learn, but it’s not their focus. They’ve been asked to test the application, and they’ll do it, but their main job isn’t testing. If you have a curious business user, take the opportunity to introduce them to some basic test design concepts — they might become a great asset to your team, a potential unicorn in the making.
There’s one thing, though, that you must teach all your testers: how to report their findings. We’ll cover this in detail later in the chapter.
Also, make it a habit to always explain the purpose of a test: What are we testing, and why does it matter? Giving testers this context improves their motivation and sharpens their focus, helping you get better, more effective results.
And don’t forget — testing isn’t intuitive for most people. Remind your business testers that finding a problem or not knowing what to do is a win. That’s the entire point of testing: to discover what’s not working as expected.
In task-based testing, a tester is given a task — like "Create an opportunity" or "Add a date filter to a report." Sometimes, this task can be as complex as an entire business process, end to end. The advantage of task-based testing is that the tester is simulating how a real user would interact with the application. As a result, they’re more likely to encounter issues that a scripted test might miss. For example, the app might technically function as intended, but a button might have an odd label that confuses the tester. This is a usability issue that should be fixed. A scripted tester, however, would follow instructions and might miss the issue entirely.
Exploratory testing takes task-based testing a step further. In addition to trying to accomplish the task (often referred to as the "happy path"), an exploratory tester will ask questions like:

- What happens if I leave a required field empty?
- What if I enter something unexpected, like a date in the past?
- What if I click Save twice, or hit the back button halfway through?
- Can a user who shouldn’t see this data reach it anyway?
In short, exploratory testers don’t just follow instructions — they explore the app, looking for unexpected behaviors. And yes, sometimes, they even ask ridiculous questions like, "What if this solved world peace?"
Scripted testing, by contrast, is more rigid and less creative. Its merit lies in its repeatability, but for that very reason, it’s better suited for a test automation robot. If a human tester is simply following a script, it’s likely a waste of their potential.
That said, exploratory testing, while powerful, has its drawbacks — it lacks repeatability. A tester might stumble across a problem but not remember how they got there. Worse yet, after the problem is fixed, they might not know how to test if it was resolved correctly. Don’t worry — we’ll cover these challenges later in the book.
It may sound ridiculous, but one of the biggest challenges in manual testing is simply getting the application into the hands of the testers. Most Salesforce testing campaigns start with confusion — finding the right Salesforce org, using the correct user credentials and ensuring everyone has the appropriate permissions.
Besides just wasting time, these issues kill the motivation of testers. So, make sure you’ve planned ahead: ensure the right version of the application is in the test environment, verify test user IDs and double-check that permissions are correct. And don’t forget to provide written instructions to every tester. Why? Because they’ll forget most of what you tell them — because, well, they’re human. (Unicorns, on the other hand, don’t forget anything. But good luck finding one.)
Before diving into the testing tasks, you should perform a Smoke Test. This is a quick round of basic tests to check if the application is stable enough for more in-depth testing. Occasionally, during active development, developers release a version that’s... let’s just say, not quite there. The smoke test ensures the app’s basic functionality is intact, so you’re not wasting your testers’ time.
The best part? Smoke tests should be fully automated. They’re a simple but crucial step to prevent major headaches later.
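A smoke suite can be as simple as a short, ordered list of named checks that halts the campaign at the first failure. A minimal sketch; the checks here are stand-ins, where real ones would log in to the test org, open key pages and load a known record through a browser-automation or API tool:

```python
def run_smoke_tests(checks):
    """Run each (name, check) pair in order; return the first failing name, or None."""
    for name, check in checks:
        try:
            if not check():
                return name
        except Exception:
            return name
    return None

# Stand-in checks for illustration; each returns True when the check passes.
smoke_checks = [
    ("login page loads", lambda: True),
    ("home page renders", lambda: True),
    ("a known record opens", lambda: True),
]

failed = run_smoke_tests(smoke_checks)
print("Proceed to full testing" if failed is None else f"Blocked by: {failed}")
```

The point is the gate, not the mechanism: if any basic check fails, nobody's time gets wasted on deeper testing.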
Testing is essentially useless if the problems your testers find aren't properly recorded. And let’s face it, even the most professional testers tend to forget one issue the moment they stumble on the next. That’s why problem reporting needs to be seamlessly integrated into your process, and testers need constant reminders to report problems as soon as they find them.
If you have a problem tracking system like Copado Plan or Jira, use it. If not, a document with a problem report template will do. But the key is to capture the right information. A great problem report typically includes these components:

- A short, descriptive summary of the problem.
- The steps to reproduce it.
- What the tester expected to happen.
- What actually happened instead.
- The environment: which org, which user, which browser.
- A screenshot or screen recording, whenever possible.
Finally, problem reports usually note the severity of the issue. But don’t leave this to your testers. It's better to assess the severity yourself or hand it off to someone who can evaluate it from a business perspective.
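If you have no tracking tool, even a tiny template generator keeps reports consistent. A sketch; the fields are one reasonable layout, not a standard:

```python
def problem_report(summary, steps, expected, actual, environment):
    """Render a problem report as plain text a tester can paste anywhere."""
    lines = [
        f"Summary: {summary}",
        "Steps to reproduce:",
        *[f"  {i}. {step}" for i, step in enumerate(steps, start=1)],
        f"Expected: {expected}",
        f"Actual: {actual}",
        f"Environment: {environment}",
        "Severity: (to be assessed by the test lead)",
    ]
    return "\n".join(lines)

print(problem_report(
    summary="Save button does nothing on the new report page",
    steps=["Open the new report page", "Fill in all fields", "Click Save"],
    expected="The report is saved and a confirmation appears",
    actual="Nothing happens and no error is shown",
    environment="UAT sandbox, test user, Chrome",
))
```

Note the severity line is deliberately left blank for the test lead to fill in, matching the advice above.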
When business testers are involved, it's best to have them test simultaneously. While technology allows people to test at their convenience, these testers usually have other full-time responsibilities, making it easy for them to forget about testing altogether if you don’t set a specific time. Delivering your brief just before the tasks also ensures that the testing runs more smoothly and efficiently. Plus, your dedicated supervision and assistance during testing will increase productivity and help keep everything on track.
If you have multiple testers, assign different tasks to each one. This reduces the number of tests any one person has to complete and ensures they're more likely to focus and engage with the tasks. It also increases the chances they'll show up for future testing cycles. In short, respect their time by being organized and giving them manageable workloads.
A good strategy is to have every tester run through the happy path of the business process to catch any major usability issues. Then, assign different testers to handle edge cases and error behaviors. This way, you maximize coverage and ensure all potential issues are addressed.
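That assignment strategy can be sketched in a few lines. Everything here is illustrative, including the tester names and task labels:

```python
def assign_tasks(testers, shared_tasks, individual_tasks):
    """Everyone gets the shared happy-path tasks; edge cases are dealt out round-robin."""
    plan = {tester: list(shared_tasks) for tester in testers}
    for i, task in enumerate(individual_tasks):
        plan[testers[i % len(testers)]].append(task)
    return plan

plan = assign_tasks(
    testers=["Julie", "Omar", "Priya"],
    shared_tasks=["Run the end-to-end lead-to-opportunity flow"],
    individual_tasks=[
        "Leave required fields empty",
        "Enter a close date in the past",
        "Open the page as a no-access user",
        "Create a duplicate lead",
    ],
)
for tester, tasks in plan.items():
    print(tester, tasks)
```

Every tester covers the happy path, while no edge case is tested twice and no one is overloaded.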
Finally, provide clear, written instructions to all testers, and be available to clarify any questions that may arise during the testing process. This proactive approach will keep things running smoothly and make sure all testers stay on the same page.
Manual testing will always have a role in your testing strategy. As we've discussed, sometimes it's more cost-effective than automation, depending on the context. In the next chapter, we’ll dive into calculating the Return on Investment (ROI) for test automation and how to determine when automation makes sense.
Planning is non-negotiable. Even if your testing needs are straightforward, you’ll still need to consider the seven key questions we covered in this chapter. Simpler features might call for a simpler plan, but you can’t skip the planning phase.
Your human testing resources are valuable. Respect their time and plan thoughtfully, especially if you want to maintain a strong, engaged team of testers in the future.
So, ready to reduce your reliance on humans? In the next chapter, we’ll explore how AI is transforming the testing game and improving the ROI of automation.