
Chapter 6: Test Case Design

Written by
David Brooks

In the realm of automation, much like in art and photography, principles of design play a crucial role in crafting successful projects. These principles, shaped by countless quiet battles with failing tests at 2 AM, form the bedrock of Copado's keyword-driven automation strategy, focusing on core aspects essential for robust automation solutions.

Principles of Test Case Design

Consider the bitter irony: test frameworks exist to bring order to chaos, yet sometimes these very solutions become wellsprings of complexity. 

Six key principles guide test case design:

  1. Maintainability
  2. Usability
  3. Reusability
  4. Efficiency
  5. Security
  6. Reliability

These principles form the foundation for creating robust and scalable automation frameworks. Not because they’re perfect––they rarely are––but because they work. By adhering to clear guidelines for keyword usage, documenting processes effectively and prioritizing readability, teams can ensure that automation efforts yield reliable results with minimal maintenance overhead. Though if we're being honest, "minimal maintenance" often feels more like an aspiration than a reality.

These principles build on the early bug detection strategies we discussed in Chapter 1. Remember that staggering statistic about bugs costing up to 640 times more to fix when caught late? Well, solid test case design is your first line of defense.

Let’s explore each principle –– not just the theory, but what they mean when perfect plans meet imperfect systems. 

Maintainability: Adapting to Change

Maintenance is often the Achilles' heel of test automation projects. The quiet truth lurking behind pristine automation strategy: keeping test cases agile and adaptable amidst software updates is where the real battle begins.

Continuous updates in the software under test can exponentially increase maintenance efforts and, even worse, make testing take longer, thus slowing down the whole release cycle. Building your scripts using concise, atomic keywords, such as those in Copado Robotic Testing, ensures clarity and ease of maintenance across projects. Even when using other tools and test script languages, choose names that will be understood years after you write the test, when some other poor soul is viewing it for the first time.

Sometimes the best automation strategy isn't about perfect design –– it's about anticipating the inevitable chaos of change.


A typical Selenium script 🙄 versus a Copado Robotic Testing QWords script 🙂:

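In rough Robot Framework sketch form, the contrast looks something like the two hypothetical suite files below: the first drives a login step through locator-based SeleniumLibrary keywords, the second through atomic, text-based QWords in the QWeb style (ClickText, TypeText, VerifyText). The URL, locators and labels are invented for illustration, and ${PASSWORD} is assumed to be supplied at run time rather than stored in the file.

# login_selenium.robot (locator-based style)
*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Login With Locator-Based Keywords
    Open Browser    https://example.my.salesforce.com    chrome
    Input Text      id=username    test.user@example.com
    Input Text      id=password    ${PASSWORD}
    Click Element   xpath=//input[@id='Login']
    Wait Until Page Contains    Home

# login_qwords.robot (atomic keyword style)
*** Settings ***
Library    QWeb

*** Test Cases ***
Login With Atomic QWords
    OpenBrowser    https://example.my.salesforce.com    chrome
    TypeText       Username    test.user@example.com
    TypeText       Password    ${PASSWORD}
    ClickText      Log In
    VerifyText     Home

When a label changes in the UI, the second version reads, and fails, in plain language; the first asks you to decode locators before you even know what broke.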
Usability: Simplifying Complexity

Usability in automation extends beyond mere functionality to the ease of understanding and scalability. Keywords that mimic human interactions with the application under test simplify the learning curve for automation engineers.

The reality of test automation isn't found in perfect frameworks, but in the daily struggle to keep scripts intelligible.

Unlike other techniques, such as Behavior Driven Development (BDD), which can result in the proliferation of high-level functional keywords and obscure step-by-step details, keyword-driven automation scripts remain transparent and intuitive.

To the merit of the BDD approach, it works fine if the architecture of your test scripts is well thought out and rigorously maintained, but in real life that’s never the case. Test maintenance happens in a rush so that the release can proceed. Your scripting technique must support “rush-driven testing” without compromising the quality and maintainability of your test scripts.

A small number of atomic script keywords have proven best for the purpose.

Reusability: Leveraging Versatility

The versatility of keywords enables their reuse across different projects and platforms. (A promise that feels almost too good to be true –– until you see it working in the trenches of real-world testing.)

While specific implementations may vary for web, mobile or other interfaces, fundamental keywords like ClickText and TypeText maintain consistency in test automation strategies. If you think about testing any ordinary application, some 80% of all the steps you take are clicking items identifiable by a text label (“ClickText”) and entering something into a labeled input field (“TypeText”).

This uniformity isn’t just about efficiency –– it's about survival in a landscape where every interface change threatens to unravel carefully crafted test suites.

This consistency not only enhances efficiency but also streamlines training and implementation across diverse environments.
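As a small, hypothetical sketch of that reuse, the user-defined keyword below is built from nothing but ClickText, TypeText and VerifyText, so the same definition can be shared across test cases and, in practice, kept in a common resource file used by many suites. The object and field names are invented:

*** Settings ***
Library    QWeb

*** Keywords ***
Create Contact
    [Arguments]    ${first}    ${last}
    ClickText    Contacts
    ClickText    New
    TypeText     First Name    ${first}
    TypeText     Last Name     ${last}
    ClickText    Save
    VerifyText   ${first} ${last}

*** Test Cases ***
Sales Rep Creates A Contact
    Create Contact    Ada    Lovelace

Service Agent Creates A Contact
    Create Contact    Grace    Hopper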

Efficiency: Streamlining Resources

Efficiency in test automation hinges on optimized design and resource management. (The quiet violence of poorly managed test suites isn't in their failure, but in how they slowly strangle velocity to death.)

Minimizing redundant operations and leveraging robust keyword design principles ensure streamlined test case execution. Techniques like managing System States effectively mitigate unnecessary delays, enhancing overall test suite performance within your CI/CD pipeline testing framework.

Each test case you write assumes the system is in a certain initial state. But each test case you run also changes the state of the system. How you implement initialization and cleanup affects the overall efficiency of your tests, and in a long test suite you may want to order the tests so that each successful test leaves the system in the state the next test expects.

Systems have memory, even when we wish they didn't.

Here is a simple example: If each test case assumes the user is logged in, it is probably a good practice to log the user in only once instead of doing it at the beginning of each test case. 
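In Robot Framework terms, that advice maps naturally onto a suite-level setup: log in once before the whole suite rather than in every test. A minimal sketch, assuming QWeb-style keywords and credentials supplied at run time:

*** Settings ***
Library           QWeb
Suite Setup       Login To Salesforce    # runs once for the whole suite
Suite Teardown    CloseBrowser

*** Keywords ***
Login To Salesforce
    OpenBrowser    https://example.my.salesforce.com    chrome
    TypeText       Username    ${SF_USERNAME}
    TypeText       Password    ${SF_PASSWORD}
    ClickText      Log In
    VerifyText     Home

*** Test Cases ***
Open Opportunities Tab
    ClickText     Opportunities
    VerifyText    Recently Viewed

Open Contacts Tab
    ClickText     Contacts
    VerifyText    Recently Viewed

Each test now starts from the logged-in state the suite setup established, instead of paying the login cost again and again.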

Security: Safeguarding Test Data

Security is paramount in any automation framework, particularly in environments involving real data. Each test script becomes both shield and potential vulnerability –– there's no middle ground.

Adopting secure practices isn’t just about checking boxes for compliance –– it’s about acknowledging the profound responsibility of handling sensitive information. When test scripts contain usernames and passwords, they become more than automation tools; they’re potential keys to the kingdom, waiting to be misused.

The brutal simplicity of it haunts every security audit: sensitive data embedded in test scripts isn't just bad practice –– it's a time bomb.
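One simple way to defuse that particular time bomb, sketched below, is to keep credentials out of the script entirely and resolve them at run time, whether from environment variables, a CI/CD secret, or your testing tool's own credential store. The variable names are illustrative:

*** Settings ***
Library    QWeb

*** Variables ***
# Resolved from the environment at run time; nothing sensitive lives in version control.
${SF_USERNAME}    %{SF_USERNAME}
${SF_PASSWORD}    %{SF_PASSWORD}

*** Test Cases ***
Login Without Hard-Coded Credentials
    OpenBrowser    https://example.my.salesforce.com    chrome
    TypeText       Username    ${SF_USERNAME}
    TypeText       Password    ${SF_PASSWORD}
    ClickText      Log In
    VerifyText     Home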

Choose testing tools like Copado Robotic Testing that ensure secure data handling, align with stringent SaaS requirements, and enhance overall reliability.

This balance between security and automation becomes especially crucial when implementing compliance testing methodology, where each test must serve both functional and security requirements.

Reliability: Ensuring Consistency

Reliability should underpin your approach to test case design. (Each failed test represents more than just broken code –– it's a ripple of doubt that spreads through your entire deployment pipeline.)

Those endless timeouts or sleep commands in test scripts? They’re band-aids masking deeper wounds. By eliminating these unreliable practices, you enhance fault tolerance and make troubleshooting less like searching for a needle in a haystack. Systematic error recovery within your Salesforce testing framework isn’t just good practice –– it’s about building trust in your automation.
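The difference is easy to see in script form. A sketch, assuming QWeb-style keywords that wait for their target before acting; the confirmation text is invented:

*** Settings ***
Library    QWeb

*** Test Cases ***
Fragile Version With A Sleep
    ClickText    Save
    Sleep        10s                 # hope the record has been saved by now
    ClickText    Related

Robust Version Without A Sleep
    ClickText     Save
    VerifyText    was created        # waits, up to a sensible timeout, for the confirmation
    ClickText     Related

The first version fails slowly and mysteriously; the second fails fast and tells you exactly which expectation was not met.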

The truth about reliable tests isn't in their success rate, but in how they fail. When crafting automated test scripts, every recovery procedure and error handler becomes part of your system's story.


Core Testing Techniques


Happy Path Testing

The very phrase 'Happy Path' carries its own dark irony in the testing world.

At its core, the Happy Path represents our most optimistic assumptions: users who follow instructions perfectly, systems that respond exactly as designed, processes that flow without interruption.

It focuses on the most common and straightforward user interactions, ensuring that the application performs correctly under normal conditions. You verify that primary functionalities work as intended and that users can complete their essential tasks.
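In script form, a happy-path case reads like a user story told in keywords. A hypothetical sketch, with invented labels and data:

*** Settings ***
Library    QWeb

*** Test Cases ***
Happy Path: Log A Customer Call
    ClickText     Contacts
    ClickText     Ada Lovelace
    ClickText     Log a Call
    TypeText      Subject    Quarterly check-in
    ClickText     Save
    VerifyText    Quarterly check-in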

Happy path testing shows that a system meets its intended use case but can't guarantee graceful handling of error conditions. It does not seek to handle edge case scenarios. The most revealing failures emerge when users venture off our carefully mapped routes –– doing things they "weren't supposed to do" while our applications respond with digital confusion.

Even though it sounds like a term from Mister Rogers’ Neighborhood, the exact origin of the term “Happy Path” remains unclear. It appears throughout software engineering literature, often used interchangeably with “happy flow” or “sunny day scenario,” highlighting its role in validating expected outcomes.

As we explored in Chapter 3's deep dive into test levels, the happy path is just one layer of your testing strategy. Sure, it's important to verify that everything works when users behave exactly as expected (wouldn't that be nice?), but remember –– real users have a knack for finding creative ways to break things.

Negative Testing

The term "negative testing" sounds almost polite –– as if we're merely exploring alternative scenarios rather than actively trying to break everything in sight.

If Chapter 2 taught us anything about Salesforce's complexity, it's that there are countless ways for things to go wrong. The 'Happy Soup' of interconnected customizations and configurations means negative testing isn't just about breaking things –– it's about understanding how your system responds when those carefully crafted integrations face unexpected inputs.

Negative testing, or sad path testing, focuses on ensuring that software behaves correctly under invalid, unexpected or error conditions. Here are some typical negative case tests to consider:

Invalid Input Handling

Empty fields. Incorrect data types. Special characters that shouldn't exist. They're all waiting to expose the fragility of our systems. With Salesforce, testing required fields and proper data typing isn't just validation –– it's verifying that someone hasn't quietly changed these critical settings when no one was looking.

Sometimes “invalid input” carries its own twisted logic: perfectly valid in one context, yet catastrophic in another. Consider a corporation's name that spans multiple lines of text. Your customer registry must accommodate it, but what happens when that same name needs to fit on a shipping label? These edge cases hide in plain sight, waiting to remind us how assumptions about "valid" data eventually shatter.

Testing negative scenarios means embracing the chaos of real-world data:

  • Empty Fields: Submitting forms with required fields left blank
  • Incorrect Data Types: Entering text where numbers are expected
  • Special Characters: Inputting escape sequences in fields
  • Incorrect Data Values: Testing boundary conditions like negative quantities
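A sketch of how a few of those scenarios might look as scripted checks; the field labels and the expected validation messages are assumptions for illustration, not taken from the list above:

*** Settings ***
Library    QWeb

*** Test Cases ***
Reject Empty Required Field
    ClickText     New
    ClickText     Save                # submit with the required fields left blank
    VerifyText    Review the errors on this page

Reject Text Where A Number Is Expected
    TypeText      Quantity    ten
    ClickText     Save
    VerifyText    Enter a valid value

Reject Negative Quantity
    TypeText      Quantity    -1
    ClickText     Save
    VerifyText    Quantity cannot be negative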

Boundary Testing

Many fields have implicit or explicit boundaries. Like the edges of a map where ancient cartographers wrote "here be dragons," these boundaries mark the difference between controlled testing and systemic chaos.

Some numeric fields may expect numbers within certain ranges. Date fields might only accept future dates, or past ones. Sometimes these boundaries emerge crystal clear from the context, but doubt often lurks –– a reminder that even the most basic assumptions need explicit definition in the requirements.

Testing these edges isn’t just about validation:

  • Upper and Lower Limits: Testing just above and below boundary values reveals where our systems start to fray. 
  • Exact Boundaries: The precise moment when “less than” transforms into “equal to” often exposes critical system assumptions. 

Because the application you’re testing contains programmed logic for ensuring entered values stay within allowed bounds, testing boundary values becomes more than routine validation. Here, at the edges of acceptable input, we discover what our systems truly believe about data –– and what they’re willing to reject.

The truth about boundaries in software: they're less like solid walls and more like undefined territories where business logic meets edge-case reality.

Error Message Testing

Each error message represents a moment of systemic failure –– a place where our careful abstractions crumble and users confront the raw machinery beneath.

When testing error handling, we face two distinct realities: The mechanical truth of whether messages appear when they should, and the deeper question of whether those messages actually help anyone. Testing can confirm the former with clinical precision, but the latter requires a more nuanced understanding of human confusion and frustration.

The fundamental elements we test:

  • User Feedback: Ensuring appropriate and clear error messages appear for invalid inputs –– though appropriate often feels like a shifting target 
  • Error Codes: Verifying correct codes return in response to failures, each one a small testament to something gone wrong

The first part belongs squarely in the realm of automation. But the second part––evaluating whether messages actually guide users through moments of confusion––that's where human judgment becomes irreplaceable. This reality splits our testing between User Acceptance Testing and automated validation, each serving different aspects of the same complex need.

Authentication and Authorization

Users access functionality in Salesforce through credentials that shape their permissions –– a delicate architecture of trust that too often becomes either prison or revolving door.

Testing authentication isn't just about verifying access rights. It's about catching those moments when a user can do too much, when permission quietly expands beyond their bounds. Positive tests confirm users can perform their assigned tasks, but the deeper challenge lies in detecting when they've been granted powers they were never meant to have.

Recall our discussion in Chapter 2 about Role-Based Access Control (RBAC) and its critical role in Salesforce security. When designing test cases for authentication, we're not just checking if users can log in –– we're verifying that the entire permission structure holds up under scrutiny.

The critical elements we must test:

  • Invalid Credentials: Though Salesforce handles basic login validation, we can't rely solely on platform security
  • Access Control: Testing restricted areas with various permission levels exposes the gaps between intended and actual access
  • User Profiles: Testing with Admin rights masks the daily reality of restricted users –– a blind spot that has broken countless deployments

Why does this matter? Because testing with an Admin login that has View All and Modify All data means you'll never catch those critical moments when a user should have been stopped from accessing restricted information. Each overlooked permission becomes a potential breach waiting to happen.
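One practical way to close that gap, sketched below, is to keep credentials out of the suite and pass them in per run, so the very same tests execute once as an admin and once as a restricted profile. The keyword names, labels and the Payroll example are illustrative:

*** Settings ***
Library        QWeb
Suite Setup    Login As    ${SF_USERNAME}    ${SF_PASSWORD}

*** Keywords ***
Login As
    [Arguments]    ${user}    ${password}
    OpenBrowser    https://example.my.salesforce.com    chrome
    TypeText       Username    ${user}
    TypeText       Password    ${password}
    ClickText      Log In

*** Test Cases ***
Restricted User Does Not See The Payroll Tab
    VerifyNoText    Payroll    # the tab should simply not be offered to this profile

Executed once with, for example, robot --variable SF_USERNAME:restricted.user@example.com and again with an admin user, the suite verifies both that permitted work succeeds and that forbidden access is actually refused.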

Session Management

The reality of session management lurks beneath every user interaction –– a digital shadow that grows longer with each passing moment of inactivity.

When a user logs into Salesforce, time becomes both enemy and constraint. Sessions expire. Users forget windows left open, then log in again elsewhere. The system must navigate these human moments of forgetfulness and multitasking without compromising security or functionality.

Two critical scenarios demand our attention:

  • Session Expiry: What happens in that liminal space between active use and timeout? Each interaction after session expiration tells a story about system resilience
  • Multiple Sessions: The quiet chaos of users logging in from different devices, each session a potential point of conflict or confusion

These aren't just edge cases –– they're the daily reality of how humans interact with our systems. Every expired session and duplicate login represents a moment where technical requirements collide with human behavior.

File Uploads

In the end, files become more than data –– they become artifacts of our assumptions about what users should and shouldn't do.

Many use cases require files to be uploaded into the system. This seemingly simple interaction hides layers of potential failure, each one waiting to expose our system's limitations:

  • Unsupported Formats: When users inevitably try to upload files your system never anticipated
  • Exceeding File Size Limits: That moment when business requirements meet storage realities
  • Empty Files: A deceptively simple test case that often reveals deep system assumptions about content

Think about it: uploading an empty file rarely causes immediate damage. But later, when another process assumes that file contains something meaningful –– that's when assumptions cascade into failures.

The paradox of file handling isn't in the complexity of what we test, but in how often the simplest scenarios reveal our system's deepest vulnerabilities.

Security Testing

A cruel paradox: every security measure we implement becomes both shield and potential weapon –– ready to be turned against us through the sheer weight of human ingenuity.

Salesforce handles platform-wide security testing, but your organization should determine whether it should run these common negative tests as well.

Malicious Input occurs when users enter database commands (SQL) into regular input fields. Instead of typing their name or address, they input code that could force the system to execute unauthorized database operations. It’s a reminder that every text field is also a potential entry point –– we must verify that our system treats all input with appropriate suspicion.

The vulnerability isn't in complex encryption or sophisticated attacks –– it's in assuming users will only enter what we expect them to enter.

Script Injection follows a similar logic: we check whether users can input script code into fields, testing whether the system will accidentally execute these unauthorized commands (Cross-Site Scripting, or XSS, vulnerabilities).
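A rough sketch of such a check: type a script fragment into an ordinary field, save, and verify the page shows it back as harmless text rather than executing it. The field label and payload are illustrative, and a real security assessment would go considerably further:

*** Settings ***
Library    QWeb

*** Test Cases ***
Script Input Is Stored As Plain Text
    TypeText      Description    <script>alert('xss')</script>
    ClickText     Save
    # If the markup were executed instead of escaped, the literal text would not be rendered back.
    VerifyText    <script>alert('xss')</script>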

Performance Testing

Performance testing in Salesforce lives in a strange limbo –– crucial for survival but trapped behind layers of bureaucratic permission.

In other words, it’s often overlooked in the Salesforce world until there is a problem.

The authors witnessed this firsthand: a job application site using Salesforce Communities collapsed under thousands of applications in a 15-minute window. Not a gentle degradation, but a complete system failure when reality collided with untested performance limits, causing the pages to fail. Understanding performance means confronting uncomfortable truths:

  • Your Salesforce Sandbox doesn’t mirror production performance 
  • Salesforce explicitly prohibits performance testing in production 
  • Getting a proper performance testing environment requires special permission from Salesforce

Here’s what we’re really testing:

  • High Load: When thousands of concurrent users crash against system limits
  • Resource Limits: In traditional computing, we’d test low memory and disk space. In Salesforce, we watch millions of records strain against invisible boundaries

The past few years of remote work have exposed new fragilities. Users wrestle with:

  • Interrupted Connections: Systems must gracefully handle network instability
  • Slow Response: Every timeout becomes a small crisis of user confidence

Plan your performance testing early. These environments aren't always available when you need them most.

Intent Based Testing

Enough with the negatives already! Let’s get back to a little positivity.

Intent Based Testing (IBT) starts with a clear understanding of what the users or stakeholders actually want to achieve. Not what we think they want, not what our documentation claims they need, but the raw reality of their goals. This means working closely with stakeholders to gather detailed requirements and understand the context in which the software will be used.

Test scenarios emerge from this understanding –– not sterile input/output matrices, but reflections of how humans actually navigate our systems. Unlike traditional test cases that might focus on specific inputs and outputs, IBT scenarios capture something more essential: the goals driving user behavior.

When AI enters this equation, something shifts:

  • Written goals transform into executable tests without demanding technical expertise
  • More tests emerge faster, though speed isn't always salvation
  • AI interprets user intent, bridging the gap between human desire and system reality
  • Quality assurance finally speaks the language of business needs

For new testers, or those transitioning from manual to automated testing, IBT offers a gentler path forward. No more drowning in automation frameworks or scripting languages –– just humans expressing what they need, and systems learning to understand.


Common Pitfalls and How to Avoid Them


“Bugs” and the Three Misses

Software carries the weight of human error in its DNA –– each bug a small confession of our fallibility.

“Bug” is what developers call their mistakes, a gentle euphemism for human error. These defects emerge from three fundamental misses:

  • mistakes, 
  • misunderstandings, and 
  • misassumptions.

Each one reveals something painfully human about the way we build systems.

Developers make mistakes. They write code they never meant to write. A date comparison flipped from "later than" to "earlier than," a null check forgotten in the rush to meet deadlines. Simple, devastating lapses that ripple through systems like cracks in foundation stone.

Misunderstandings bloom in the gap between what specifications say and what they mean.  Consider a requirement like “enable the OK button when the user enters a correct date.” The business analyst, intimately familiar with context, knows “correct” means any future date. The developer, working from pure logic, checks only that the date exists.

Misassumptions complete this trinity of human error. One developer assumes “future date” means anything after today, while the business meant to include today itself. These small interpretive gaps spawn bugs that feel almost inevitable in hindsight.

These three misses create defects that manifest as faults when used. Sometimes the only damage is an annoyed user and wasted time. Sometimes there's more at stake. Each bug represents a moment where human understanding failed to bridge the gap between intent and implementation.


Advanced Testing Approaches


Testing isn't about finding every bug –– it's about understanding which failures will hurt the most, then watching them emerge like dark prophecies in production.

Testing means methodically walking through application paths with different inputs, observing how systems respond. But here’s what senior testers know in their bones: you decide the expected result before executing a test. Without this forethought, defective behavior slides past in peripheral vision, a ghost you’ll meet again in production.

A professional never executes a test step without first asking what should happen next.

Every test begins with the “happy path”–– that optimistic journey where users do exactly what we expect. If the path fails, testing anything else becomes an exercise in futility. The happy path usually works, if only because developers probably tried it themselves.

Once you trust the software does what it should, you make it do what it shouldn’t. Two main techniques drive this destructive testing:

  • Taking unexpected paths
  • Supplying different data

The simplest example? Click your browser’s Back button. Many applications, including Salesforce, respond with bewilderment. But the easiest way to break almost any application is to give it erroneous data. Users will absolutely do this –– not on purpose, but inevitably. The application must protect itself, detect the error, inform the user. Most manage this sometimes. Almost all fail occasionally.

Recall the date field example?

Your application expects a valid future date. The tester methodically tries a handful of valid dates. If next Monday works, next Tuesday probably will too. Testing every future date isn't just impossible –– it's a form of professional self-harm.

Then comes a past date, say the third of last month. The format is correct, but the application should reject it. The expected behavior? An error message. When that works, other past dates likely will too.

But the present––that liminal space between past and future––this is where testing becomes an art form. A professional tester circles this boundary like a detective at a crime scene. Today's date. Yesterday. Tomorrow. The edges of time itself become a testing ground.

Then reality breaks in with clearly wrong inputs: negative dates, empty fields, the 32nd of any month, the mystical 13th month, the year 0. Each impossible value probes not just the code, but our assumptions about what "impossible" really means in systems built by humans for humans.


Equivalence Partitioning and Boundary Value Analysis

We give fancy names to testing techniques like desperate poets trying to romanticize the mundane tragedy of human error.

These examples describe two simple, effective testing techniques that professionals have deliberately named in ways that make them hard for mere mortals to understand. Perhaps it's easier to face the endless cycle of finding bugs when we wrap it in academic language.

Equivalence partitioning. A grandiose term for a brutally simple concept: if one past date triggers an error, other past dates probably will too. Those two dates belong to an equivalence class –– they make the application behave the same way.

We group inputs by how they make the system respond, then test one from each group. Not because it's perfect, but because testing everything would mean never shipping anything.

Success here comes from understanding how developers build their logic gates. The tester knows the developer has written code that distinguishes between past dates and future dates: not because they’ve seen the code, but because they understand how humans translate business requirements into binary choices. They also know developers rarely waste time distinguishing between individual past dates or future ones.

Boundary-value analysis sounds like academic posturing. In reality, it’s about finding those precise moments where systems reveal their deepest confusions. When does ‘past’ become ‘future’?
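Both ideas translate naturally into a data-driven script: one representative value per equivalence class, plus the values that sit right on the boundary. A sketch for the future-date field discussed above, using Robot Framework's test template mechanism; the field label, messages and literal dates are placeholders (in practice you would generate "today" and its neighbors dynamically), and whether today itself should be accepted is exactly the boundary question the requirement has to settle:

*** Settings ***
Library          QWeb
Test Template    Entered Date Should Produce

*** Keywords ***
Entered Date Should Produce
    [Arguments]    ${date}    ${expected_message}
    TypeText      Delivery Date    ${date}
    ClickText     Save
    VerifyText    ${expected_message}

*** Test Cases ***
# one value per equivalence class       date          expected message
Representative Future Date              2031-06-15    Record saved
Representative Past Date                2019-03-01    Date must be in the future
# boundary values around "now"
Boundary: Yesterday                     2025-01-01    Date must be in the future
Boundary: Today                         2025-01-02    Date must be in the future
Boundary: Tomorrow                      2025-01-03    Record saved
# clearly impossible values
Thirteenth Month                        2025-13-01    Invalid date
Thirty-Second Day                       2025-01-32    Invalid date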

Mastering Testing Techniques

What separates good testers from lousy ones isn't flashy tools or certifications. It's mastering three fundamental elements: expected results, equivalence partitioning and boundary-value analysis. But the best testers? They’ve learned something deeper, something that emerges only through countless hours of watching systems fail in ways documentation never predicted. 

They call it heuristics –– a fancy academic term for what’s really just hard-won wisdom about where systems break. It’s a collection of rules of thumb, each one earned through late-night deployments and impossible-to-reproduce bugs. 

Experienced testers have internalized patterns: which errors developers typically make, what types of functionalities are prone to errors and which tests expose the gaps between what we built and what we meant to build. It’s like a sixth sense and sometimes, they can’t even explain how they know –– the knowledge lives somewhere between instinct and scarred experience. 

(But you don’t need mystical testing intuition. Just google “test heuristics” and start learning from other people’s battle scars before you earn your own.)


Optimizing Test Timing and Speed


What to Test

The brutal truth no one tells you in testing certification courses: every normal application contains more possible test paths than your career has hours.

And those paths? Each one may contain a defect or a problem, a small crack that will eventually spread into a system-wide failure. The art of knowing what to test is about knowing which parts, if they fail, will cause the most damage. Which functions are most likely to misbehave. Which ones lurk closest to critical business functions. Which bugs will wake you at 3 AM. The problem is you must guess, because you never know for certain.

Professional testers can, however, make very smart guesses. Not because they want to, but because comprehensive testing died somewhere between agile deadlines and exponential system complexity. Here are some rules they’ve written in late-night deployment fixes and customer escalations:

  • Severity rule: Functions that could cause severe damage when they fail are always high in the tester’s priority list –– not because they’re most likely to break, but because we can’t afford the cost if they do
  • Novelty rule: New or changed functions are more likely to contain defects than unchanged functions. But remember, the inner workings of a software application are interdependent in various ways. Sometimes programming errors in new code manifest themselves as malfunctions in old code
  • Complexity rule: The more complex the feature, the more data it touches, the more algorithmic processing it requires –– the more likely it hides defects in its depths
  • Accumulation rule: Previously defective functions carry their history like scars. They’ll likely break again
  • Replication rule: Find one fault, and its siblings likely lurk nearby in similar code
  • Reoccurrence rule: When a bug resurfaces after being "fixed," it's telling you something profound about your system's DNA. This truth extends beyond software –– just consider every recurring banking glitch, car repair or healthcare breakdown you've experienced 

When to Test

Testing exists in that painful space between what's possible and what ships to production –– a gap that grows wider with every agile sprint.

Conventional wisdom whispers that finding defects early costs less than discovering them late. This isn’t just theory: it’s a truth written in the DNA of manufacturing, born in Toyota’s fabric looms, where a simple innovation changed everything. Their big quality innovation was a loom that could automatically detect a broken thread and stop itself, minimizing waste. This obviously saved a lot of time and materials, because all production that continued after a thread broke literally became waste.

Each meter of ruined fabric represented more than material loss –– it represented faith broken between maker and customer.

Software development carries this same weight. The closer to creation we catch a bug, the faster we understand its cause, the clearer the fix becomes. But there’s something darker here: code built on defective code breeds new species of defects, many lying dormant until the original bug is fixed. Each line of code written atop a hidden defect becomes potential waste, a debt that compounds with every commit.

New and changed functions demand early testing. Unit tests should lead this charge –– they’re possible the moment code exists. But possibility and practicality wage their own quiet war.

Unit tests cannot fully replace application tests because they can’t detect all types of defects. Instead, they reduce the crushing weight of higher-level testing, illuminating paths through the darkness.

The truth about testing timing lives in uncomfortable realities: developers rarely find joy in testing their own code (who wants to search for their own mistakes?), and every hour spent testing is an hour not spent coding.

This is why software teams usually have dedicated professional testers. Not because it’s ideal, but because it’s human.

Please refer to Shifting Left and Shifting Right for more information about the timing of the tests.

The Speed Imperative in Testing

Every developer knows that bittersweet moment –– when fast feedback could have prevented hours of building on broken foundations, but testing itself becomes another deadline pressure point.

A developer receiving quality feedback within hours of releasing new code stands a fighting chance of understanding cause and effect. Wait a couple of days? Now they’re building new features on potentially faulty ground, each line of code possibly amplifying hidden defects. (Not much different from a broken thread in a Toyota loom.)

A development team practicing DevOps, releasing to test often––sometimes multiple times per day––isn’t just following best practices. They’re acknowledging a brutal reality: changes must stay small, feedback must stay fast, or the whole system eventually collapses under its own complexity.

This rapid test cycle demands automated release pipelines, automated tests and exploratory testing.

Yes, exploratory testing means manual work. And yes, we should automate everything. But here’s the paradox no one warns you about: building automated tests for new functionality often slows down initial testing. No matter how advanced your tools, automation takes time to create –– time that often feels stolen from immediate testing needs. But once the automated tests are there, they run tens of times faster than manual tests and are available to run whenever, even when there are no testers available.

The recipe for successful timing is simple but not easy:

  1. You need a CI/CD pipeline for Salesforce enabling frequent builds and tests.
  2. Run quick smoke tests to verify releases aren’t fundamentally broken (see the tagging sketch after this list).
  3. If the smoke tests pass, run all your automated tests while exploring new functionality manually.
  4. As soon as you have provided the exploratory testing feedback to the developers, automate the most relevant parts of your exploratory tests. In particular, automate all tests that detect defects so that when the next release arrives in test, your automation can verify that the defects were corrected.
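Step two usually comes down to tagging: mark a handful of fast, critical cases as smoke tests and run only those first. A minimal sketch, with invented test content:

*** Settings ***
Library    QWeb

*** Test Cases ***
Login Page Loads
    [Tags]    smoke
    OpenBrowser    https://example.my.salesforce.com    chrome
    VerifyText     Username

Full Opportunity List Is Reachable
    [Tags]    regression
    ClickText     Opportunities
    VerifyText    Recently Viewed

The fast subset runs first (for example, robot --include smoke tests/), and the full suite only once it passes.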

When providing “feedback to developers” (our polite term for defect reports), occasionally include praise. Developers rarely get positive feedback, even when they often deserve it.

As we explored in Chapter 5's deep dive into automation ROI, the speed of your testing feedback loop can make or break your development cycle. Fast, reliable test cases aren't just nice to have –– they're essential for maintaining the rapid pace of modern Salesforce development.

The Real World

Now that you have mastered the essential testing techniques, don’t be fooled into thinking you will be able to find every defect. A testing phenom who can imagine all those weird and surprising things a real user will do has yet to be born. Therefore, we’ll close this chapter with an old testing joke.

A tester goes to a bar. He orders a beer. He orders minus one beers. He orders zero beers. He orders one million beers. He even orders a kangaroo! No defects found.

A customer walks into the bar. She asks where the restroom is. The whole bar explodes.

Conclusion: Elevate Your Test Automation Strategy

Congratulations on making it to the end of this chapter. I think you will agree there were some deep topics covered in this one.

In the topic of Boundary-Value Analysis covered above, we examined what types of inputs you should use. For manual tests, these are entered by a person; for automated tests, they are provided by the testing engine.

So how do you manage the various data in your automated tests?

Good question. And it’s precisely what our next chapter is all about.


About The Author

SVP Evangelism

I am a serial entrepreneur who has worked at 6 startups with 3 successful exits over the past 34 years in the valley.
