
Multiplatform mobile testing: why we moved on from Appium and what we learned

Ditching Appium for native testing frameworks was a game-changer—by taking ownership of UI tests and simplifying our approach, we unlocked faster, more reliable results and finally broke free from the endless cycle of flaky tests and delays.

About five years ago, I was part of a large-scale project where we invested heavily in Appium for UI testing. At the time, it seemed like the perfect solution for automating tests across platforms. The promise of writing once and testing everywhere sounded too good to pass up. But after two years of work, we threw it all away.

That experience wasn’t just a failure; it was a masterclass in what not to do when approaching UI testing. Recently, a fellow CTO asked me about using Appium, and it brought me back to those lessons. While the tool may have evolved since then, the challenges we faced and the solutions we found remain relevant today. Here’s what went wrong, what we changed, and how it shaped the way I think about UI testing.

What went wrong with Appium

Our Appium setup wasn’t just a technical problem—it was also a process and ownership issue. The tool, the workflows, and the division of responsibilities all contributed to a system that drained time and resources without delivering the expected results.

External teams maintaining UI tests

One of our biggest mistakes was outsourcing the writing and maintenance of UI tests to external QA teams. Here's why:

  • The gap in ownership: The external teams weren’t the ones changing the application code, so they lacked the context to keep the tests aligned with an evolving codebase.
  • Frequent breakages: Since developers weren’t responsible for maintaining the tests, updating them was never a priority. The cycle of QA teams stabilizing the tests only for them to break again with the next code change became an endless loop.

Challenges with Appium as a tool

Beyond ownership issues, Appium itself presented significant hurdles:

  • Complex setup: Getting Appium configured and integrated into our workflows was no small feat. It required substantial effort just to get started.
  • Flaky and slow tests: Compared to native testing frameworks, Appium tests were slower and more prone to failure, reducing our confidence in them.
  • Developer overhead: Appium demanded modifications to the app (e.g., exposing view IDs) to support the tests. This added extra workload for developers without delivering clear benefits (a sketch of what this looked like follows this list).
  • Platform-specific differences: iOS and Android apps often had slightly different flows or layouts. These differences made it hard to create effective Page Object Models, which are essential for reusable and maintainable test code.
  • Higher-level mocking needed: With Appium, we had to mock at a high level, such as using a fake HTTP server to simulate backend responses. This coarse-grained approach added complexity without giving us the precision we needed.
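To make the "developer overhead" point concrete, here is a rough Android sketch of what exposing view IDs for Appium tends to look like. The screen, layout, and identifier names are illustrative, not taken from our codebase:

```kotlin
import android.os.Bundle
import android.widget.Button
import androidx.appcompat.app.AppCompatActivity

// Illustrative only: a screen where an accessibility label exists purely so an
// external Appium suite can locate the view via its accessibility-id strategy.
class LoginActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_login)

        val loginButton = findViewById<Button>(R.id.login_button)
        // Added solely for the external test suite; it gives real users nothing
        // and can even interfere with how screen readers announce the button.
        loginButton.contentDescription = "login_button"
    }
}
```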

What we changed

Realizing that our approach wasn’t working, we decided to overhaul how we handled UI testing. This wasn’t just about switching tools—it was about rethinking processes, ownership, and priorities.

Developers took ownership of UI tests

We shifted the responsibility for UI tests from external QA teams to developers. This change had a profound impact. Here's what it involved:

  • Switch to native testing frameworks: We transitioned to native tools like Espresso for Android and XCTest for iOS. These frameworks were faster, more reliable, and better integrated into the development workflow (a minimal example follows this list).
  • Developer-driven maintenance: By making developers responsible for UI tests, we ensured that tests stayed up-to-date with code changes. This alignment reduced breakages and significantly improved efficiency.
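For readers who haven't used these frameworks, this is roughly what such a test looks like on the Android side (XCTest plays the equivalent role on iOS). The screen and the `R.id.*` names are placeholders rather than our actual app:

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test

// A minimal Espresso sketch; LoginActivity and the view IDs are hypothetical.
class LoginFlowTest {

    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun successfulLoginShowsHomeScreen() {
        // The test drives the UI through the same synchronized APIs the app
        // itself uses, which is a large part of why these tests run faster
        // and flake less than an external driver.
        onView(withId(R.id.email)).perform(typeText("user@example.com"))
        onView(withId(R.id.password)).perform(typeText("secret"), closeSoftKeyboard())
        onView(withId(R.id.login_button)).perform(click())

        onView(withId(R.id.home_screen)).check(matches(isDisplayed()))
    }
}
```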

Redefining the ‘definition of done’

We introduced a new rule: passing UI tests became part of the definition of done for every feature. Here's how:

  • UI tests as a requirement: If the UI tests didn’t pass, the feature wasn’t considered complete (one way to enforce this in the build is sketched after this list).
  • Collaboration with QAs: Developers wrote and maintained the tests, but QA teams worked closely with them to define what needed to be tested. This collaboration ensured that the tests were meaningful and aligned with real user scenarios.
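One way to make that rule mechanical rather than aspirational is to wire the UI test suite into the build's verification gate. The sketch below assumes a stock Android Gradle setup with a debug build type; on iOS the equivalent is an xcodebuild test step in the pipeline:

```kotlin
// build.gradle.kts (app module) — a sketch of making UI tests part of the
// verification gate so a feature cannot be "done" while they fail.
tasks.named("check") {
    // connectedDebugAndroidTest runs the Espresso suite on a connected
    // device or emulator; making check depend on it applies the same gate
    // locally and in CI.
    dependsOn("connectedDebugAndroidTest")
}
```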

Mocking at the right level

Native frameworks allowed us to mock individual app components with precision, whether data layers, network responses, or UI interactions. Here’s how that precision translated into better testing outcomes:

  • Granular mocking: Swapping in fakes at exactly the layer under test gave us faster feedback loops and more reliable test setups (see the sketch after this list).
  • End-to-end flexibility: We could create a mix of fully mocked scenarios for quick feedback and true end-to-end tests to validate the entire system. By contrast, Appium’s higher-level mocking requirements added complexity without clear benefits.
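As a rough illustration of mocking at the right level, the sketch below shows a UI test swapping a fake data source in behind the app's own interface instead of standing up a fake HTTP server in front of the whole app. `UserRepository`, `ServiceLocator`, `Profile`, and `ProfileActivity` are hypothetical names, not a specific library's API:

```kotlin
import androidx.test.core.app.ActivityScenario
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withText
import org.junit.Before
import org.junit.Test

data class Profile(val name: String)

// The seam the app already depends on; the test replaces this and nothing else.
interface UserRepository {
    suspend fun loadProfile(): Profile
}

class FakeUserRepository : UserRepository {
    // Deterministic canned data: no network, no fake HTTP server, no flakiness.
    override suspend fun loadProfile() = Profile(name = "Test User")
}

// Hypothetical seam the production code resolves its repository through;
// a real app might use Hilt/Dagger or manual injection instead.
object ServiceLocator {
    lateinit var userRepository: UserRepository
}

class ProfileScreenTest {

    @Before
    fun swapRepository() {
        ServiceLocator.userRepository = FakeUserRepository()
    }

    @Test
    fun profileNameIsShown() {
        // ProfileActivity is a hypothetical screen that renders the profile name.
        ActivityScenario.launch(ProfileActivity::class.java).use {
            onView(withText("Test User")).check(matches(isDisplayed()))
        }
    }
}
```

The point is the seam, not the locator pattern: because the fake sits behind the app's own interface, the same test flow can later run against the real implementation for a true end-to-end pass.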

The results

The changes we implemented significantly transformed our development process:

  • Improved reliability: Using native testing frameworks minimized test flakiness, while developer ownership ensured that tests stayed consistently aligned with the evolving codebase.
  • Cost savings: By removing the constant back-and-forth between developers and external QA teams, we eliminated communication delays and reduced inefficiencies.
  • Faster feedback loops: Tests ran more quickly and integrated seamlessly into our CI/CD pipelines, allowing us to catch issues early and speed up development cycles.
  • Simpler mocking: Native frameworks provided granular and straightforward mocking options, making test setups faster, easier, and more reliable.
  • Ending the stabilization cycle: Shifting test ownership to developers replaced the repetitive stabilization efforts by QA teams with robust, maintainable testing practices.

These changes not only improved the quality of our testing but also streamlined our workflows and boosted overall team productivity.

Lessons learned

Reflecting on our experience, we uncovered key lessons that reshaped our approach to UI testing. These insights helped us transition from a flawed process to one that was faster, more reliable, and scalable. Here’s what we learned:

  1. Ownership is key. Tests are only as good as the people maintaining them. When developers own UI tests, the quality and reliability improve significantly.
  2. Simpler solutions often work better. Cross-platform tools like Appium may seem appealing for their versatility, but in our experience, native solutions were simpler, faster, and far more effective.
  3. Flow consistency matters. If your app has significant platform-specific differences in flow or structure, cross-platform test automation can become a major headache. Native testing frameworks, on the other hand, let you tailor tests to each platform, avoiding these issues.
  4. Mocking at the right level is crucial. Native frameworks allow for precise mocking, whether you’re testing individual components or creating full end-to-end scenarios. With Appium, the higher-level mocking required significantly more effort and introduced additional maintenance challenges.
  5. Stabilization shouldn’t be a repeated effort. Stabilizing tests over and over isn’t sustainable. By moving ownership to developers and using tools designed for each platform, we achieved a more reliable testing process that didn’t require constant fixing.
  6. Collaboration between developers and QAs is essential. QAs bring critical insights into what needs to be tested, while developers ensure that those tests are robust and maintainable. This partnership is invaluable for building reliable UI tests.

These lessons shaped our testing strategy into one that delivers faster feedback, reduces overhead, and improves overall quality. By aligning tools, teams, and processes, we built a foundation for success in multiplatform testing.

Final thoughts

Our experience with Appium didn’t go as planned, but it highlighted a crucial lesson: success in testing isn’t just about the tool—it’s about how you implement and maintain it. For us, switching to native frameworks and adopting a more integrated approach to testing made all the difference.

If you’re evaluating Appium or similar cross-platform tools, take a step back and consider how well they align with your team’s workflow and expertise. In many cases, simpler native solutions can deliver better results, especially when supported by strong developer ownership, precise mocking strategies, and close collaboration between developers and QA teams.
