Summary of QA Global Summit 22.2 – Junior Track

This is an AI-generated summary. There may be inaccuracies.

00:00:00 - 01:00:00

In this section of the QA Global Summit 22.2, the speaker discusses the importance of web accessibility and the need to incorporate Quality Engineers (QEs) into the development process. They emphasize the benefits of involving QEs early on to prevent bugs and ensure high-quality products. They also explain the concepts of lean QA and lean QA infrastructure, which involve leveraging the expertise of developers and implementing processes and guidelines to improve product quality. The speaker emphasizes the iterative nature of the QA process and the role of automation in saving time and effort on manual testing. They also highlight the importance of monitoring key product and quality metrics, as well as the processes of bug bash meetings and quality maturity evaluation. The speaker concludes by discussing the Web Content Accessibility Guidelines (WCAG) and the challenges of implementing them in web applications. They stress the importance of accessibility for people with disabilities and offer tips on how to improve web accessibility.

  • 00:00:00 In this section of the video, the speaker thanks the audience while background music plays; the automatic transcript also contains stray occurrences of the word "foreign."
  • 00:05:00 In this section of the QA Global Summit 22.2, the host introduces the event and welcomes the attendees. He expresses his excitement about the large number of participants and the diversity of the audience, which includes QA engineers from countries such as Spain, South Africa, Ukraine, and Portugal. The event is co-organized with a QA company and is recognized as an important event in the QA industry. The community manager, Julia, is introduced as the driving force behind the event, having assembled 16 speakers for each day, 32 speakers in total across the two-day event.
  • 00:10:00 In this section, the speaker discusses the philosophy of the QA Global Summit, which aims to find the best use cases and practices from around the world and share them with everyone, especially those who cannot afford expensive conferences. The summit is free to watch online for juniors, while mid-level and senior engineers are encouraged to buy tickets to support the project. The speaker expresses gratitude to the speakers and program committee for their involvement and mentions the special community price for the next two days. They also announce an upcoming Node.js summit and highlight the new test runner module introduced with Node.js 18.
  • 00:15:00 In this section of the QA Global Summit, the speaker invites viewers to join as partners or sponsors for the next summit, or to showcase their products and spread the word about them. They also mention that they have participants from various countries, including Azerbaijan, India, Croatia, Malaysia, Bulgaria, Serbia, Netherlands, Armenia, Saudi Arabia, and more. The speaker emphasizes the importance of building friendships with local communities and offers special benefits for community members. They discuss the schedule for the event, which includes 11 hours of content with 16 speakers and Q&A sessions after each block. The speaker encourages viewers to ask questions and mentions that the recordings of the event will be available, although they recommend joining their platform for a dedicated Q&A tab and general chat. They introduce the first moderator, Stacy Cashmore, who welcomes the audience and acknowledges the global presence of attendees.
  • 00:20:00 In this section, the speaker discusses the concept of embedding QA (Quality Assurance) into the engineering team to improve product quality. They explain that although responsibility for product quality is often placed on quality engineers, it is really the collective responsibility of developers, product owners, and everyone involved in the project. The speaker introduces the idea of "lean QA," which involves supporting large groups of developers with minimal QA engineers. They emphasize the importance of involving developers in the testing process and providing guidance on how to test effectively. By leveraging the expertise of developers and building centralized infrastructure and processes, companies can achieve faster release cycles, reduce costs, and improve the overall development life cycle.
  • 00:25:00 In this section, the speaker discusses the concept of lean QA infrastructure and the role of quality engineers in ensuring high-quality products. They explain that quality engineers focus on both happy path flows and edge cases that developers may overlook, such as dependencies on browsers or operating systems. The speaker emphasizes the need for alignment across organizations to prioritize the value of quality engineers. They also highlight the importance of educating developers on writing basic automation tests and best practices. The quality department works closely with CI/CD engineers to define quality checks in the pipeline and establish infrastructure for non-functional testing, which may require separate processes. By implementing these processes and guidelines, lean QA can speed up validations and aid developers and the quality department in delivering high-quality products.
  • 00:30:00 In this section, the speaker discusses the post-development phase and how QA engineers leverage automation to save time and effort on manual testing. They also mention the importance of bug bash meetings for identifying bugs, as well as the need for monitoring key product and quality metrics once the product is released. The speaker emphasizes the iterative nature of the QA process and the responsibility of quality engineers to ensure quality is owned by everyone. They also highlight the processes of precept reviews and quality maturity evaluation as important tools for preventing issues and improving the state of quality management.
  • 00:35:00 In this section, the speaker discusses some key processes that Lean QA relies on to ensure high-quality outcomes within an organization. One process is the Safe Trend process, which involves teams reviewing themselves and identifying common problems across multiple teams to prevent future issues. Another process is quality metrics, which rely on numerical data to assess the quality of applications and enable consistent understanding and improvement. By implementing these processes, Lean QA can save costs and efforts, allowing companies to hire more developers and release products faster. Additionally, involving developers in the quality process improves their knowledge and builds their understanding of quality practices. Overall, Lean QA offers advantages such as cost savings, improved developer education, and efficient product release.
  • 00:40:00 In this section, the speaker discusses the benefits of incorporating Quality Engineers (QEs) into the development process. By involving QEs early on, developers can think through the testing of their applications and prevent bugs that would otherwise arise later. This shift towards a lean QA model allows QEs to be leveraged across different projects and teams with minimal impact. The speaker also emphasizes the importance of well-documented processes and quality perspectives, which can serve as a self-serve knowledge repository for engineers. Overall, incorporating QEs in the development process helps save manual effort, ensures high-quality products, and promotes cost-effective use of quality in small and medium-sized companies.
  • 00:45:00 In this section, the speaker discusses the importance of captions for consuming content, especially for people with disabilities. They also highlight how captions are not just for those who are deaf or hard of hearing but are also useful for people who are in situations where they cannot hear the sound, such as on a bus or in a noisy environment. The speaker stresses the need to view accessibility from a broader perspective, as it is not just limited to people with disabilities but is relevant to everyone. The speaker then switches to discussing the importance of accessibility in web apps and how even today, we have some accessibility features implemented without realizing it, but there is still a lot of work to be done to improve the overall accessibility of web apps. The speaker concludes by sharing some tips on how to start testing for web accessibility and how to improve it.
  • 00:50:00 In this section, the speaker explains how they became aware of web accessibility and emphasizes that it is not just for experts in the field. They also discuss the reasons why many clients from the United States are concerned about accessibility, including the existence of laws and potential lawsuits. The speaker shares statistics about the number of people with disabilities worldwide and in Europe, highlighting the importance of considering accessibility when creating web applications. They also mention the European Accessibility Act, which will make implementing WCAG 2.1 mandatory by June 2025. Overall, the speaker urges QA testers, software engineers, and business people to start addressing web accessibility as it is an important and beneficial aspect to consider.
  • 00:55:00 In this section, the speaker discusses the Web Content Accessibility Guidelines (WCAG), a set of guidelines for making web content more accessible. WCAG has four principles: perceivable, operable, understandable, and robust. Its success criteria are grouped into three conformance levels: A, AA, and AAA. The aim is to improve accessibility with each new feature or website redesign. However, understanding and implementing these criteria can be challenging, as the official WCAG website lacks clear explanations and often focuses on what not to do rather than providing guidance on what to do. The speaker also emphasizes the importance of making non-text content accessible, such as videos, by including features like captions, audio descriptions, and subtitles. They suggest starting with basic accessibility principles from sources like W3Schools and paying attention to details such as image accessibility.
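
One concrete way to check the "captions for video" point above is a small automated test. The sketch below (TypeScript, using Cypress purely as an illustration; the URL and the assumption that captions ship as native HTML5 track elements are mine, not the speaker's) asserts that every video element on a page declares a captions or subtitles track:

```typescript
// Minimal sketch: fail the build if any <video> lacks a captions/subtitles track.
describe('WCAG 1.2 - captions for pre-recorded video (sketch)', () => {
  it('every video element offers a captions or subtitles track', () => {
    cy.visit('https://example.org/media'); // placeholder URL
    cy.get('video').each(($video) => {
      // native captions are shipped as <track kind="captions"> or kind="subtitles"
      const tracks = $video.find('track[kind="captions"], track[kind="subtitles"]');
      expect(tracks.length, 'video should declare a caption/subtitle track').to.be.greaterThan(0);
    });
  });
});
```

This only covers captions delivered through standard HTML5 tracks; players that burn in or side-load captions would need a different check.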

01:00:00 - 02:00:00

In this section of the video, the speaker discusses the importance of using proper load testing tools for server-side and client-side performance testing. They highlight tools such as JMeter, k6, Artillery, and BlazeMeter for load testing, and Apptim for capturing performance metrics during automated tests. The speaker emphasizes the need to integrate these tools into the development pipeline to ensure optimal performance of endpoints. They also mention the importance of collaboration with stakeholders to define performance metrics and requirements for manual testing. Additionally, the speaker recommends considering the organization's requirements and team's maturity when selecting automated testing tools for mobile apps. They suggest categorizing devices into high, mid-range, and low-range based on user base data for comprehensive coverage.

  • 01:00:00 In this section, the speaker discusses the importance of using the alt attribute in image tags for website accessibility. While developers typically remember to include the src attribute, they often forget to add the alt attribute. The alt attribute matters for SEO as well as for screen reader users, who rely on it to understand the content of images. The speaker emphasizes the need for descriptive, meaningful alt text to enhance website accessibility. Additionally, the speaker touches on the significance of using proper headings and subheadings for users who navigate web content with screen readers. Overall, the discussion emphasizes the need for web developers to prioritize accessibility and improve their understanding of it in order to enhance the user experience for all. (A hedged Cypress sketch covering this alt check and the heading check from the next bullet appears after this list.)
  • 01:05:00 In this section, the speaker discusses the importance of using headings correctly in web development. They explain how using heading levels properly can enhance accessibility for screen reader users and improve a website's SEO ranking. They suggest testing headings by inspecting the page source or using plugins that provide a clear structure of the heading hierarchy. The speaker also touches on the topic of forms and highlights Facebook's implementation of error states, which effectively indicate mistakes to users. They emphasize the need for multiple indicators, as relying solely on color can exclude individuals with color blindness.
  • 01:10:00 In this section, the speaker discusses the importance of accessibility testing and how it should be incorporated early in the development process. They emphasize the need for distinguishable elements on a webpage, such as buttons, for users who may have disabilities or use screen readers. They also mention the significance of color contrast in ensuring text readability. The speaker gives examples of design considerations to make websites more accessible, such as using clear indicators for clickable elements and providing explanations for unfamiliar icons. Overall, the speaker underscores the importance of addressing accessibility issues from the beginning stages of development. (A small sketch of the WCAG contrast-ratio calculation appears after this list.)
  • 01:15:00 In this section, the speaker emphasizes the importance of awareness, knowledge, and empathy when it comes to making web apps accessible for people with disabilities. They suggest learning about technical fundamentals such as HTML, SEO, copywriting, and legal obligations. The speaker also provides tips on how to pitch the idea to a boss who may not initially care about accessibility, including mentioning the larger market of people with disabilities and the potential revenue increase from inclusive websites. They also suggest highlighting the boost in corporate social responsibility and improvements in copywriting and user experience. Finally, the speaker offers their contact information for further feedback and concludes the talk.
  • 01:20:00 In this section, the speaker recounts a situation where their app received complaints from users after it was released. Users reported issues such as high memory consumption, slow loading times, crashes, and battery drain. The speaker emphasizes the importance of conducting client-side performance tests to identify and fix these issues. They highlight the challenges they faced in debugging and testing, but ultimately managed to release a successful version by incorporating performance testing earlier in the development life cycle. The speaker introduces themselves as Nitin, an engineering manager at Free Malaysia, and expresses their passion for testing and building a community-driven space for testers. They then provide a brief overview of mobile apps, explaining the three basic types: native apps, web apps, and hybrid apps. They discuss the pros and cons of each type, highlighting factors such as performance, maintenance costs, and device space consumption.
  • 01:25:00 In this section, the speaker discusses the importance of mobile app performance and how it can impact user experience. They explain that a negative experience with an app can significantly affect the relationship between the user and the company. Factors such as app load time, search result speed, and overall performance greatly contribute to user satisfaction. The speaker highlights five main objectives for evaluating mobile app performance: performance during heavy workloads, hardware usage, capacity, protocol level performance, and performance under critical conditions. They emphasize the need to align test objectives with business requirements and end user expectations, and stress the importance of collaboration with developers and stakeholders to prioritize performance testing scenarios. Additionally, considering factors like latency and bandwidth is crucial for creating a realistic testing environment.
  • 01:30:00 In this section of the video, the speaker discusses the factors that impact client-side performance: backend, network, and the behavior of the app itself. For backend performance, factors such as response time, number of API calls, and server downtime need to be considered. The network performance of the app can vary depending on the network speed, so factors like latency and bandwidth are important to measure. Testing client-side performance helps understand how the app behaves on a device and how it uses shared resources with other apps. Metrics to consider include device resource usage, rendering errors, and response time. The speaker also emphasizes the importance of proper SLAs for all these metrics.
  • 01:35:00 In this section, the speaker discusses the importance of detecting various performance issues in mobile apps such as memory leaks, screen freezing, crashes, and startup and transactional issues. They explain that when a mobile app doesn't meet user expectations, there is a high chance users will abandon it, resulting in potential revenue loss. With the advent of 5G, more personalized experiences and features will be available, making app performance even more crucial. The speaker provides tips and tricks for optimizing app performance, including caching images, compressing and resizing images, reusing data templates, reducing HTTP requests, creating a perception of faster loading, loading data as needed, providing an offline mode, using the right tools for performance tuning, and implementing an application performance monitoring system.
  • 01:40:00 In this section, the speaker discusses the importance of diagnosing and monitoring the performance of applications to ensure a good user experience. They recommend using performance monitoring tools like New Relic and AppDynamics to track both user experience metrics and computational resource usage. The speaker also emphasizes the need to test mobile app performance on real devices to accurately measure performance and avoid damage to revenue. They stress the importance of including performance testing early in the development lifecycle and considering performance as a functional requirement rather than a non-functional one. Overall, the speaker emphasizes the importance of collecting metrics and using the right tools to achieve the desired business results and recommends further resources for performance testing.
  • 01:45:00 In this section, the speaker discusses the importance of having proper alert systems in place to ensure smooth production and highlights the need to consider customers' hardware when testing applications. They also mention that the increasing availability of 5G has an impact on performance testing for backend systems, as users are likely to request more data with faster connectivity. Additionally, they emphasize the need for seamless server-client interaction and rendering to provide a smooth experience, especially when exploring AR/VR capabilities. The speaker also mentions the importance of defining proper SLAs and suggests using tools such as the Adee Comprehensive Accessibility Tool for Figma and Meta SEO Inspector for SEO checks.
  • 01:50:00 In this section, the speaker discusses the importance of various elements for website accessibility and offers recommendations for tools that can be used to ensure accessibility. They highlight the significance of attributes like heading structure and meta tags, as well as the use of free tools such as Access Tools for auditing and improving accessibility. They also mention the potential use of paid tools like MathJax and Axitos. The speaker mentions having used Access Tools on their website and finding it helpful for identifying easy fixes. Additionally, the conversation touches on the use of tools like Playwright and Cypress for testing accessibility during website development. The speaker expresses interest in exploring these tools further and integrating them into their testing pipeline. Another speaker mentions that simplicity in design is important for accessibility, even if it means forgoing some flashy elements. Lastly, the discussion touches on the future plans of integrating automated testing tools into the pipeline for performance testing.
  • 01:55:00 In this section, the speaker discusses various tools that can be used for server-side and client-side load testing, such as JMeter, k6, Artillery, and BlazeMeter (a minimal k6 sketch appears after this list). They emphasize the importance of integrating these tools into the development pipeline to ensure that endpoints are performing well. They also mention the use of Apptim as a tool for capturing performance metrics during automated tests. In terms of setting benchmarks for manual testing, the speaker highlights the need for collaboration with stakeholders to define the metrics and performance requirements. When it comes to automated testing tools for mobile apps, they recommend Katalon Studio and Apptim, but stress the importance of considering the organization's requirements and the team's maturity when selecting a tool. Regarding testing on real hardware, the speaker suggests categorizing devices into high, mid-range, and low-range categories based on user-base data to ensure comprehensive coverage.
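
The alt-attribute and heading-hierarchy checks described at 01:00:00 and 01:05:00 are easy to automate. The following is a hypothetical Cypress spec in TypeScript, not code from the talk; the URL and selectors are placeholders:

```typescript
describe('basic accessibility checks (sketch)', () => {
  beforeEach(() => cy.visit('https://example.org/')); // placeholder URL

  it('all images carry an alt attribute', () => {
    cy.get('img').each(($img) => {
      // alt="" is legitimate for purely decorative images; requiring the attribute to exist
      // at least stops screen readers from falling back to the file name.
      expect($img.attr('alt'), `alt attribute on ${$img.attr('src')}`).to.not.be.undefined;
    });
  });

  it('heading levels never skip (h1 -> h2 -> h3 ...)', () => {
    cy.get('h1, h2, h3, h4, h5, h6').then(($headings) => {
      let previous = 0;
      $headings.each((_, el) => {
        const level = Number(el.tagName[1]); // "H2" -> 2
        expect(level, `unexpected jump to <${el.tagName.toLowerCase()}>`).to.be.at.most(previous + 1);
        previous = level;
      });
    });
  });
});
```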
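
For the color-contrast point at 01:10:00, the underlying WCAG 2.x math fits in a few lines. This is a sketch of the published formula (relative luminance, then ratio = (L1 + 0.05) / (L2 + 0.05)); the example colors are mine, not the speaker's:

```typescript
type RGB = [number, number, number]; // 0-255 per channel

function relativeLuminance([r, g, b]: RGB): number {
  const channel = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(foreground: RGB, background: RGB): number {
  const l1 = Math.max(relativeLuminance(foreground), relativeLuminance(background));
  const l2 = Math.min(relativeLuminance(foreground), relativeLuminance(background));
  return (l1 + 0.05) / (l2 + 0.05);
}

// Grey #767676 on white is roughly 4.54:1, just above the 4.5:1 AA threshold for normal text.
console.log(contrastRatio([0x76, 0x76, 0x76], [0xff, 0xff, 0xff]).toFixed(2));
```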
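
As a companion to the load-testing tools listed at 01:55:00, here is what a minimal k6 script tends to look like. The endpoint, virtual-user count, and thresholds are invented for illustration; recent k6 releases can run TypeScript directly, otherwise transpile to JavaScript first:

```typescript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 20,            // concurrent virtual users
  duration: '1m',     // total test length
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95th percentile under 500 ms
    http_req_failed: ['rate<0.01'],   // less than 1% failed requests
  },
};

export default function () {
  const res = http.get('https://example.org/api/health'); // placeholder endpoint
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```

Wiring a script like this into the pipeline, as the speaker suggests, usually means failing the build when a threshold is breached.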

02:00:00 - 03:00:00

In this section of the QA Global Summit 22.2 – Junior Track video, the speakers discuss various topics related to testing and automation. They cover categorizing devices for testing purposes, the importance of stable environments for automated tests, the use of WireMock for API testing, and the benefits of using Cypress as an automation tool. The speakers emphasize the need for strategic approaches to automation and the value of learning the fundamentals of HTML, CSS, and other programming languages for QA testers. They also announce upcoming events and express their gratitude to the audience.

  • 02:00:00 In this section, the speaker discusses the concept of categorizing devices for testing purposes. They suggest that instead of testing every individual device, one can define a category and select one device from that category (a small illustration of this idea appears after this list). By collecting data on user devices and building categories based on that data, testers can optimize their testing approach. They also mention the possibility of randomizing the device selection to get a better picture of overall test progress. The speaker further adds that automation and integration with pipelines is possible for mobile device tests, particularly functional tests. They emphasize the need for scalability and distributed testing to ensure that tests can be completed within a reasonable amount of time. Additionally, the importance of learning the fundamentals of HTML and CSS for QA testers is highlighted, as it allows for better understanding of how websites work and enables the identification of accessibility issues. Reading documentation and collaborating with developers is also recommended.
  • 02:05:00 In this section of the video, the speaker discusses approaches for debugging performance issues and understanding their impact on application performance. They highlight the importance of application performance monitoring (APM) tools in debugging and root cause analysis. The speaker also discusses the benefits of automated testing for manual testers, seeing automation as a support system that helps testers work in a smarter way.
  • 02:10:00 In this section, the speakers discuss the importance of focusing on a specific area of automation and understanding the concepts and basics before building on top of that knowledge. They caution against automating everything without knowing what problem you are trying to solve, as it may not lead to a good return on investment. They also emphasize the need to separate different components, such as learning about Java and Selenium separately before attempting automation. Additionally, they mention the significance of understanding the purpose of automation and using frameworks to simplify the process. Overall, the speakers stress the importance of having a strategic approach to automation, rather than diving in without proper knowledge and planning.
  • 02:15:00 In this section of the QA Global Summit 22.2 – Junior Track video, the speaker expresses gratitude for the international participants, and there is accompanying music in the background.
  • 02:20:00 In this section, the speaker announces three upcoming events: React Global Online Summit, Software Architecture Summit, and Data Science Columbus Summit. They provide a promo code for a 20% discount and encourage the audience to check out these events. Then, the next moderator and speaker, Priesh, is introduced and welcomed to the stage.
  • 02:25:00 In this section, the first speaker, Killian Jimenez, discusses the importance of stable environments for automated tests. He compares automated tests to pizza, with QA professionals being the delivery guys who provide the tests to the team. However, when tests are executed in unstable environments, such as low performance or services being down, they can fail for reasons unrelated to the code itself. This can lead to false positives and unreliable test results. Jimenez emphasizes the need for stable environments to ensure that the tests accurately reflect the status of the code.
  • 02:30:00 In this section, the speaker discusses the negative impact of testing in unstable environments. They explain that it wastes both the developer team's time and their own time, as regression test failures may not be due to bugs but rather environment issues. This leads to a loss of valuable feedback and uncertainty, often resulting in manual testing instead of automation. The speaker introduces WireMock as a solution, which is a mock server that can replace the unstable environment and provide reliable responses for UI automation tests. WireMock also offers features such as recording and playback of interactions with other APIs, making it a valuable tool compared to other mock solutions.
  • 02:35:00 In this section of the video, the speaker discusses the use of WireMock, an application that can be used across various client applications like Android, iOS, and web. WireMock acts as a proxy, recording the requests sent by the client application and generating mock responses from the API. The speaker provides an example scenario using an Android client application, an Express.js API, and WireMock. In this setup the Android application communicates with WireMock, which proxies the request to the API and generates mock responses. The speaker also introduces the code for the components involved and shows the testing framework in action, with the tests executing successfully. Finally, the speaker demonstrates setting up WireMock as a standalone server using a command in the terminal.
  • 02:40:00 In this section, the speaker discusses the use of WireMock for API testing. WireMock offers various features to help developers test their APIs. The speaker mentions that WireMock logs every action it records and reports this information through the terminal. Additionally, WireMock can be set up to act as a proxy for all communications to a particular IP address and port; in this case, it is configured to redirect all communication to the API. The speaker also talks about record mappings, which can be used to record all responses generated by the API (a hedged sketch of driving WireMock's stubbing from test code appears after this list). The speaker then demonstrates how WireMock can be run as a standalone process, redirecting all traffic to the API and recording all responses. They also cover the negative scenario, where an error message is returned so the necessary information can be recorded for analysis. Finally, the speaker mentions that changes to the recorded JSON files allow the process to be automated.
  • 02:45:00 In this section, the speaker discusses automating mappings using WireMock, a tool for creating test-ready mock services. He highlights the importance of having the correct WireMock dependency and the need to specify which mappings and files the WireMock server should use. He explains how to create a new WireMock instance, how to close every WireMock instance after testing, and how to use positive and negative scenarios to test the application's login use case. He also covers a useful WireMock feature: configuring stubs differently as needed, for example recording everything or adapting stubs on the fly. To achieve this, he recommends following the documentation of the specific library being used.
  • 02:50:00 In this section of the QA Global Summit 22.2 - Junior Track, the presenter discusses how to avoid using multiple JSON files in WireMock by setting up all the necessary stubs inside the main code using a Builder pattern. They also show how to manually create and configure WireMock during test execution without using JSON files. This approach makes it easy to extract specific steps for automation testing and to reproduce every action or interaction in the application client. WireMock is presented as a powerful tool for testing and automation that can support multiple testing environments, automation testing, and manual processes.
  • 02:55:00 In this section, the speaker discusses the use of different tools for testing, including WireMock for UI automation tests, which is considered more reliable than working with an unstable environment. They conclude their presentation and express their gratitude to the audience. The next speaker, Ahmad Assad, shares his experience in QE automation and explains why he chose to use Cypress as an automation tool. Ahmad talks about the need for new automation frameworks and the pros and cons of Cypress. He emphasizes the importance of learning through examples and provides an overview of the QA and automation journey.
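
The device-categorization advice from 01:55:00 and 02:00:00 can be captured in a tiny helper: group devices into tiers from user-base data, then pick one device per tier (optionally at random) for each run. The device names below are hypothetical examples, not figures from the talk:

```typescript
type Tier = 'high' | 'mid' | 'low';

const devicePools: Record<Tier, string[]> = {
  high: ['Pixel 8 Pro', 'iPhone 15 Pro'],
  mid: ['Galaxy A54', 'Pixel 6a'],
  low: ['Galaxy A14', 'Redmi 9A'],
};

/** Pick one device per tier, randomized so coverage rotates between runs. */
function pickDevices(pools: Record<Tier, string[]>): Record<Tier, string> {
  const picked = {} as Record<Tier, string>;
  (Object.keys(pools) as Tier[]).forEach((tier) => {
    const pool = pools[tier];
    picked[tier] = pool[Math.floor(Math.random() * pool.length)];
  });
  return picked;
}

console.log(pickDevices(devicePools)); // e.g. { high: 'iPhone 15 Pro', mid: 'Pixel 6a', low: 'Redmi 9A' }
```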
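
The WireMock walkthrough (02:30:00 to 02:50:00) mixes JSON mapping files with programmatic setup. A hedged alternative, assuming a WireMock standalone server is already running on localhost:8080 (for example via `java -jar wiremock-standalone.jar --port 8080`), is to register stubs over its admin API from the test code itself; the /api/login stub below is illustrative, not the mapping shown in the talk (Node 18+ for the global fetch):

```typescript
const WIREMOCK = 'http://localhost:8080';

async function stubLoginSuccess(): Promise<void> {
  // POST /__admin/mappings registers a stub mapping on a running WireMock server
  await fetch(`${WIREMOCK}/__admin/mappings`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      request: { method: 'POST', url: '/api/login' },
      response: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        jsonBody: { token: 'fake-token', user: 'demo' },
      },
    }),
  });
}

async function main(): Promise<void> {
  await stubLoginSuccess();
  // The client under test now talks to the mock instead of the unstable backend:
  const res = await fetch(`${WIREMOCK}/api/login`, { method: 'POST' });
  console.log(res.status, await res.json()); // 200 { token: 'fake-token', user: 'demo' }
  // Reset so stubs do not leak between tests:
  await fetch(`${WIREMOCK}/__admin/reset`, { method: 'POST' });
}

main().catch(console.error);
```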

03:00:00 - 04:00:00

During the QA Global Summit 22.2 - Junior Track, several speakers discuss different aspects of software testing and automation. One speaker focuses on the evolution of automation tools, highlighting how Selenium has been dominant for many years while newer tools like Cypress, Protractor, and Playwright have emerged. They emphasize the advantages of using these more evolved, asynchronous frameworks, highlighting the freedom and capabilities they offer. Another speaker dives deep into Cypress as a testing tool, showcasing its features and benefits, and providing a demo on how to use it for end-to-end testing. Another topic covered is the concept of autonomous testing, with speakers emphasizing the need for automation and ways to integrate AI and machine learning into testing processes. They discuss challenges and present their own autonomous testing tools and platforms. Overall, the speakers convey their passion for automation and the potential of autonomous testing in improving testing efficiency.

  • 03:00:00 In this section, the speaker discusses the evolution of automation tools in the field of software testing. They mention how Selenium has been the dominant tool for many years, but new tools like Cypress, Protractor, and Playwright have emerged in recent years. The speaker highlights the advantages of using more evolved and asynchronous frameworks, emphasizing the freedom and capabilities they offer. They also express their excitement about the continuous updates and improvements in Cypress, noting how the tool is gaining popularity within the testing community. The speaker encourages learners to explore Cypress and take advantage of the current opportunity to learn and master it.
  • 03:05:00 In this section of the QA Global Summit 22.2 - Junior Track, the speaker discusses the features and benefits of Cypress, a newer JavaScript testing tool. Cypress is described as easy and powerful, with features such as DOM snapshots for time-travel debugging, automatic waiting, and built-in spies, stubs, and clocks, allowing real-time execution and easy integration with popular cloud providers. The demo shows how to get started with Cypress, how easy it is to choose a browser, and how everything can be controlled through a dashboard.
  • 03:10:00 In this section of the video, the speaker discusses how to use Cypress for end-to-end testing of web applications. They explain how to set up and run tests from Cypress and describe how easy it is to write them. They also give an example of using Cypress to log in to a website and retrieve a username (a hedged reconstruction of such a test appears after this list). The speaker uses simple language and provides clear examples to make the topic easy to understand for developers familiar with JavaScript frameworks.
  • 03:15:00 In this section, the speaker demonstrates the time-travel feature in testing, where they can go back and see what happened during the test. They show an example of getting a username and then trying to get a password. They use the contains method to find a login button and debug any issues that arise. While initially experiencing some difficulty, they eventually find a solution and continue with the test. The speaker highlights how this time-travel capability saves a lot of time in identifying and solving problems during testing.
  • 03:20:00 In this section, the speaker discusses how easy it is to learn and use Cypress for testing. They demonstrate the process of performing actions, such as typing and clicking on elements, and how to assert expected outcomes. They emphasize the simplicity of using Cypress by referencing the helpful documentation and how one can easily find answers through online searches. They also mention that learning through practical examples and demos is a better approach than reading the entire documentation. Overall, they convey that Cypress is a user-friendly tool that can be easily learned and utilized for testing purposes.
  • 03:25:00 In this section, the speaker, Marshall, introduces himself as a test analyst turned test manager with experience in core banking projects. He is also a former vice president of the Czech and Slovak software testing board. Marshall discusses his passion for trendy topics such as automation, DevOps, testing in Scrum, AI, machine learning, and autonomous testing. He then proceeds to talk about his fascination with autonomous testing and how he believes it is the next stage of software testing. Marshall outlines the four steps he will cover: what he thinks is wrong with testing, how autonomous testing works and its benefits, how to build an autonomous testing bot, and his own experience and results in implementing autonomous testing. He also briefly mentions the company he works for, Tesena, which provides software testing services and training. Overall, Marshall sets the stage for his talk on autonomous testing and the development of a testing bot.
  • 03:30:00 In this section, the speaker discusses their dissatisfaction with manual testing and their belief that test automation could solve the problem of delays. They explain the process of manual testing and the challenges it presents, such as spending a lot of effort on test execution and not having enough time due to late functionality delivery. They then introduce the idea of automating test execution, which brings about its own set of challenges, including maintenance and script programming. The speaker highlights the difficulties of maintaining automation scripts and the lack of capacity and skills in traditional testing teams. They then mention the concept of autonomous testing, which aims to remove human intervention from testing, and express interest in exploring this approach further to increase speed, reduce technical complexity, and save effort.
  • 03:35:00 In this section, the speaker discusses the approach of removing humans from testing and relying solely on AI and machine learning. While this approach may have benefits such as speeding up testing and lowering maintenance costs, it is important to note that it is not a complete replacement for human involvement. The speaker emphasizes that test automation should bring these benefits but also acknowledges that test autonomy is just better automation. The speaker then explores two approaches: an evolutionary approach that focuses on improving existing automation processes and a revolutionary approach that considers entirely new methods. The speaker also examines how autonomous testing relates to test automation and proposes mapping new activities into the traditional testing approach. Key features introduced for autonomous testing include self-healing for maintenance and deviation classification for result evaluation.
  • 03:40:00 In this section, the speaker discusses the concept of autonomous testing and the need to integrate various features to improve automation. They mention that many companies are working on building autonomous testing tools and highlight the importance of automating assertions, building object models automatically, generating test cases from documentation, and other similar features. The speaker emphasizes the need to cover all the gaps in automation and presents their own tool called "vopi," which is an autonomous testing platform. They believe that it is easier to build an autonomous tool and integrate automation features, rather than trying to evolve existing automation features into autonomous testing tools.
  • 03:45:00 In this section, the speaker explains how their platform can integrate with existing test cases by setting up a config file, without the need for manual scripting. They also introduce their bot, which crawls the application, collects data, and prepares a model to test based on that model. They use visual assertion and visual testing to validate the application's appearance and functionality, and are building their own solution with machine learning functionality. The speaker demonstrates the platform's capabilities through a quick video, showing how test cases are autonomously generated and results are collected in a front-end interface.
  • 03:50:00 In this section, the speaker discusses the challenges they are facing in building an autonomous regression testing platform. They highlight the lack of a single integrated end-to-end testing tool that provides autonomous capabilities, requiring them to integrate existing solutions or build from scratch. They also mention that technical skills are still needed despite the goal of removing technical complexity, and introduce new metrics which some people are skeptical about. The speaker expresses the difficulty of managing expectations, with some dismissing the idea while others have extremely high expectations, such as replacing the entire testing team. However, they believe they are ready to face these challenges.
  • 03:55:00 In this section, the speaker discusses the potential for solving challenges in testing through the use of automation and autonomous testing approaches. They suggest that implementing new tools and pushing test automation further could remove tedious manual tasks from daily testing activities. The speaker emphasizes the importance of introducing more automation, not just in test execution, but also in other teams. They mention that autonomous testing is already applicable in many cases, but caution against over-promising what it can achieve. The speaker expresses their enthusiasm for autonomous testing and invites viewers to reach out for more information or to share their own experiences with applying autonomous testing.
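
The login demo described around 03:10:00 to 03:20:00 can be reconstructed roughly as follows. This is a hedged sketch in TypeScript, not the presenter's code; the URL and the data-cy selectors are placeholders:

```typescript
describe('login flow (sketch)', () => {
  it('logs in and shows the username', () => {
    cy.visit('https://example.org/login');          // placeholder URL
    cy.get('[data-cy=username]').type('demo.user'); // automatic waiting, no explicit sleeps
    cy.get('[data-cy=password]').type('s3cret', { log: false });
    cy.contains('button', 'Log in').click();        // the "contains" lookup mentioned at 03:15:00
    cy.url().should('include', '/dashboard');
    cy.get('[data-cy=greeting]').should('contain.text', 'demo.user');
  });
});
```

The time-travel feature praised in the talk then lets you click any of these commands in the runner and inspect the DOM snapshot captured at that step.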

04:00:00 - 05:00:00

In this video, the speakers discuss various topics related to QA and testing. They cover the benefits of using WireMock for API testing, the potential for autonomous testing in UI and API testing, the best test frameworks to start with for QA automation, alternative mock services, and the future of QA with AI and ML. They also touch on topics such as manual testing with WireMock, selecting elements without unique identifiers, and starting with automation tools that allow quick involvement in the job. Additionally, they mention the challenges and advantages of integrating the page object model for multiple views and provide an update on the progress of their tool's development. Overall, the video provides valuable insights into different aspects of QA and testing.

  • 04:00:00 In this section, the speaker discusses the benefits of using WireMock for API testing. They explain that WireMock can automatically proxy requests and record them, eliminating the need to search for documentation on how the APIs work. This capability is particularly useful when working with developers who may not provide complete or up-to-date documentation. The speaker also highlights that WireMock allows testers to work independently and generate mappings and steps for automation. This sets WireMock apart from other mock services and makes it a valuable tool for QA professionals.
  • 04:05:00 In this section, Marcel explains that while their tool, Whoopi, is visually based and primarily focused on UI testing, similar autonomous approaches can be applied to API testing. He mentions that security testing, for example, already uses autonomous checks and bots. However, he clarifies that their current tool is not designed for API testing, but they are open to exploring it in the future. Marcel also discusses their efforts to improve the tool through machine learning, specifically using supervised learning to build a model based on screenshots and manual intervention to improve the model based on failures. He emphasizes that they are still in the early stages of development but believes that this approach has potential.
  • 04:10:00 In this section, the speakers answer a question about using WireMock for manual testing and for apps with high security requirements. They explain that WireMock can be executed locally on the tester's machine or deployed as a standalone process. In the case of web testing, access to specific environments is necessary, while for mobile testing, coordination with the development team is required. They also discuss the challenge of handling security in build generation and emphasize the importance of ensuring that these mock builds are not used in production. They conclude that WireMock has the potential to be used effectively by manual testers in various ways. In another question, they address the issue of selecting elements without unique classes or IDs. They suggest adding unique identifiers to the code as a best practice. However, if that is not possible, they recommend using smart selection methods, such as finding a parent element with a unique ID and then searching within it (a short Cypress scoping sketch appears after this list). They advise against using "contains" and assure that it would not significantly impact automation running time.
  • 04:15:00 In this section of the video, the speaker discusses the best test framework to start with for QA automation. They mention that the choice depends on the market and what you are looking for. If you are targeting startups in Berlin, the speaker suggests starting with Cypress as it allows you to quickly get involved in the career and learn by practice and example. However, they caution that while Cypress is easy and simplified, it may not provide the big picture, unlike starting with Java and Selenium. Ultimately, the speaker suggests looking at the market and choosing an automation tool that allows you to hop into the job quickly and learn by doing. The speaker also explains the difference between automation and autonomous testing, highlighting that autonomous testing aims to automate the entire testing process, including test data creation and report assessment. It goes beyond simply automating test execution and seeks to optimize and remove human interaction as much as possible. The speaker emphasizes that autonomous testing is the next step in automation and offers a more advanced approach to achieving higher levels of automation.
  • 04:20:00 In this section of the video, the speaker discusses alternatives to WireMock for mocking services. They mention other mocking options such as Mockito and Moco that developers can use in their code for automation. Additionally, they mention Cypress as a tool with a proper way to mock requests in tests. The speaker acknowledges that starting with mock services can be difficult, but finds WireMock to be a useful starting point due to features like proxying and recording mappings. They also mention that if using the same test scripts for web and mobile is not an option, a separate approach should be taken for mobile testing. For integrating the page object model for multiple views, the speaker suggests considering the project size and the extent of duplication before deciding to use it. As for feedback from current users, the speaker mentions that they have just started their first project and don't have specific details yet.
  • 04:25:00 In this section, it is discussed that even though the solution is marketed as needing little more than a credit card and a URL, it still requires effort to make it successful. However, the speaker believes that next year they will be ready to open it up to the first early adopters. The pilot project will be restricted to only five companies this year, and the first learning from it is that there are technical challenges that must be addressed. The speaker thanks Marshall for his contribution to the discussions and encourages him to reach out if he has any questions. The conference continues with further presentations and demos, and the speaker will discuss IoT and performance testing tomorrow.
  • 04:30:00 In this section, the host introduces himself as Nitin from Malaysia and mentions that he will be hosting the segment for a couple of hours. He then introduces the first speaker, Himanshu Chauhan, who will be discussing how to transform QA journeys with AI and ML. Himanshu is described as a seasoned quality engineering expert who has expertise in modern test automation practices.
  • 04:35:00 In this section, the speaker, Himanshu, introduces himself as a software engineering professional and discusses the topic of the future of quality engineering with AI and ML. He mentions how testing in the past used to be manual but has evolved with the introduction of tools like Selenium and WebDriver. He emphasizes the need for further innovation in the QA field and plans to share use cases and ideas to inspire the audience's testing practices.
  • 04:40:00 In this section, the speaker discusses the evolution of testing processes in the QA field, comparing it to the advancements in F1 race tire changes. Initially, testing took days or months, but with the introduction of automation tools like Selenium, the time decreased to a few hours. The speaker also mentions new practices such as the test pyramid approach and shift left approach that can further reduce testing time and improve product quality. To take testing to the next level, the speaker suggests implementing machine learning or AI in the QA area. They provide an overview of machine learning, deep learning, and artificial intelligence, explaining how they can be utilized in different use cases. The speaker emphasizes that it took several decades for these technologies to become mainstream due to the lack of proper data.
  • 04:45:00 In this section, the speaker discusses the industries that are adopting artificial intelligence and machine learning technologies, such as finance, healthcare, and security. They also mention the future investment in AI and ML, with the banking and financial sector expecting a $25 billion investment by 2024. The speaker then moves on to discuss the evolution of QA, mentioning the transition from manual testing to automated regression testing and codeless automation. They also highlight the upcoming trend of AI/ML-based hyper automation, where automation scripts can be generated automatically and analyzed using AI/ML algorithms. This approach will make it easier to create test scripts, execute them, and analyze failures in a more efficient manner.
  • 04:50:00 In this section, the speaker discusses the different phases of automation testing. The first phase involves exploratory testing, with a mix of manual and automation efforts. The second phase focuses on non-functional testing, using tools like JMeter, Gatling, and others for performance and load testing. The third phase is codeless automation, where BDD (Behavior-Driven Development) and TDD (Test-Driven Development) are used to automate regression and non-functional tests. Finally, the speaker mentions the fourth phase, AIML-based hyper automation, where everything can be automated with a click of a button, including logging defects in the tracking tool. They mention tools like Coded UI and Ready API for automating requirements and APIs.
  • 04:55:00 In this section, the speaker discusses three new advancements in QA: automated script healing, automated result analysis, and automated defect logging. For automated script healing, the idea is to use AI and ML approaches to swap element IDs and use alternate element locators so that scripts still pass when element IDs change. Automated result analysis involves using tools like Report Portal to automatically classify and categorize failures based on learning models implemented through AI/ML. Finally, automated defect logging can be achieved by leveraging Jira's APIs to link reporting and execution tools, enabling automatic logging of failures and defects (hedged sketches of the fallback-locator and Jira-logging ideas appear after this list). The speaker then transitions to discussing different categories of AI/ML-based tools in the QA domain, such as differential tools, visual tools, declarative tools, self-healing tools, and reporting and analysis tools. Some examples of these tools include Spider AI for creating UI scripts and visual UI testing tools that verify color combinations and alignment after a release.
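
The "find a unique parent, then search inside it" advice from 04:10:00 maps directly onto Cypress's .within(). A hypothetical example, with selectors invented for illustration:

```typescript
it('clicks Remove inside one specific order card', () => {
  cy.visit('https://example.org/orders');      // placeholder URL
  cy.get('[data-order-id="1042"]')             // the parent that *is* uniquely identifiable
    .within(() => {
      cy.contains('button', 'Remove').click(); // scoped to that card only, so no ambiguity
    });
});
```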
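
The automated script healing idea at 04:55:00 boils down to trying alternate locators when the primary one stops matching. Real self-healing tools learn and re-rank candidates with ML; the plain-DOM sketch below only illustrates the fallback mechanism, with invented selectors:

```typescript
function findWithFallback(candidates: string[]): Element {
  for (const selector of candidates) {
    const el = document.querySelector(selector);
    if (el) {
      if (selector !== candidates[0]) {
        console.warn(`primary selector failed, healed with: ${selector}`);
      }
      return el;
    }
  }
  throw new Error(`no selector matched: ${candidates.join(', ')}`);
}

// Hypothetical usage: the id changed after a release, but a fallback still resolves.
const loginButton = findWithFallback([
  '#login-btn',            // preferred, stable id
  'button[name="login"]',  // attribute-based fallback
  'form [type="submit"]',  // structural fallback
]);
```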
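
Automated defect logging through Jira's REST API (also 04:55:00) is typically a single POST to /rest/api/2/issue. The sketch below assumes Jira Cloud with basic auth via an API token; base URL, project key, and credentials are placeholders, and a real pipeline would also attach evidence and de-duplicate issues:

```typescript
async function logDefect(summary: string, description: string): Promise<string> {
  const res = await fetch('https://your-domain.atlassian.net/rest/api/2/issue', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Basic ' + Buffer.from('bot@example.org:API_TOKEN').toString('base64'),
    },
    body: JSON.stringify({
      fields: {
        project: { key: 'QA' },      // placeholder project key
        issuetype: { name: 'Bug' },
        summary,
        description,
        labels: ['auto-logged'],
      },
    }),
  });
  if (!res.ok) throw new Error(`Jira returned ${res.status}`);
  const body = (await res.json()) as { key: string };
  return body.key; // e.g. "QA-123"
}

// Hypothetical call from a test reporter hook:
// await logDefect('Checkout spec failed on build #512', 'See attached run report for details.');
```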

05:00:00 - 06:00:00

In this section of the QA Global Summit, the speaker discusses the benefits of implementing AI techniques in software testing. They highlight the expedited timelines, error detection capabilities, and growth opportunities that AI brings to the software testing world. The speaker also emphasizes the importance of understanding the basics of AI and provides examples of its applications in various industries. They conclude by stressing the need for businesses to adopt AI to stay competitive and shape the future of their operations.

  • 05:00:00 In this section, the speaker discusses different tools that can be used for testing, including visual testing tools for applications focused on visually impaired users. They mention tools that use AIML algorithms to compare previous and current versions of an application, as well as tools that use natural language processing and DSL to enable functional testing automation. They also mention self-healing tools that help with element locator failures and tools for reporting and analysis, including classification of defects. Finally, they briefly mention the benefits of AIML-based testing, emphasizing its multi-fold benefits to different industries and products.
  • 05:05:00 In this section, the speaker discusses the benefits of using AI and ML in QA testing, such as faster testing, early detection of issues, increased release confidence, and wider test scope. They mention that building tools from scratch using machine learning algorithms and models can be divided into three categories: supervised, unsupervised, and reinforcement algorithms. They provide examples of how Google uses unsupervised learning to suggest search results based on user data. They also recommend libraries like TensorFlow, PyTorch, and scikit-learn for starting to learn machine learning. The speaker concludes by demonstrating a simple example of integrating Alexa with Jira's API for getting daily reports.
  • 05:10:00 In this section, the next speaker, Joanne Philippi, introduces herself as an industrial engineer with experience in machine learning and artificial intelligence (AI) at Amazon. She expresses her excitement about discussing AI as the future of QA and acknowledges Lisa Brenis as her inspiration for the talk. Joanne mentions that AI is already present in QA and emphasizes the importance of understanding its basics. She highlights that we are currently experiencing unimaginable advancements and that being part of this moment in history is beneficial for humanity. Joanne then shares a video she made using the AI app Synthesia, demonstrating the replication of human interactions through AI. She divides her talk into three main topics: what AI is, the rudiments of AI, and the replication of human interactions through AI.
  • 05:15:00 In this section, the speaker addresses the concern about whether machines will eventually surpass humans and rule the world. They explain that AI is essentially a tool created by humans to solve problems quickly. They highlight the importance of failing quickly and how AI helps in problem-solving. The speaker emphasizes that the future lies in humans and AI coexisting together, rather than AI taking over completely. They provide examples from science fiction movies and introduce Sophia, an advanced humanoid robot, to illustrate the capabilities and limitations of AI. Overall, AI is seen as a combination of engineering, technology, and human input, serving as a tool to enhance problem-solving abilities.
  • 05:20:00 In this section of the video, the speaker discusses different branches of AI, including linguistic annotation, data collection and annotation, and data validation and relevance. Linguistic annotation refers to the tagging of language data in text or spoken form. Data collection and annotation are important for AI as data is the key component for its functioning. Lastly, data validation and relevance improve the quality of data used by predictive models. The speaker also highlights the impact of AI, such as the popularity of Amazon's Alexa virtual assistant with over 40 million users in the US alone. The speaker emphasizes that while AI is a recent trend, humans have been living with it for decades without realizing it.
  • 05:25:00 In this section, the speaker discusses the growing familiarity of the Amazon Alexa voice assistant among U.S. citizens, as well as the complexity of understanding how AI works and how it could help in daily life and different industries. The speaker compares AI to the human brain, specifically the way humans make decisions, which is what makes AI systems smarter, more autonomous, and capable of identifying patterns in data. The speaker mentions the three main layers of the field: artificial intelligence, machine learning, and deep learning, and how they try to replicate a human brain. The speaker then moves on to AI in QA testing, specifically the benefits of implementing AI in the software testing world, highlighting three main ones: expedited timelines, detection of errors, and the ability to identify growth opportunities. With AI, software projects and applications can be set up to fail as fast as possible, which helps identify areas of growth and change early. These benefits make AI in QA testing helpful for developers and testers alike.
  • 05:30:00 In this section, the speaker discusses the three main ways in which AI can benefit the field of software testing. Firstly, AI can help companies analyze data to determine their success in the market. Secondly, AI can expand the knowledge and capabilities of testers, allowing them to delve into business intelligence, linguistic programming, math optimization, and algorithmic analysis. Lastly, AI-driven QA test tools offer various features such as maximum code coverage, intelligent error identification, faster decision-making, simplified testing, and reduced costs. Overall, implementing AI in testing can help reduce waste, improve efficiency, and shape the future of business.
  • 05:35:00 In this section, the speaker emphasizes the importance of businesses adopting AI and technology for their future success. They highlight the potential of AI to revolutionize business operations and mention the key role of data in driving strategies and delivering results. The speaker also discusses common uses of AI in various industries, such as customer relationship management and cybersecurity. They stress the need for businesses to understand human behavior and thoughts in order to effectively leverage AI as a complementary tool. Additionally, the speaker announces a free ticket giveaway for the senior track and concludes by thanking the audience and encouraging them to ask questions. The next speaker is introduced as Sham Sundar, who will be discussing AI techniques to improve software testing.
  • 05:40:00 In this section, the speaker introduces the topic of AI techniques to improve software testing and explains the concept of artificial intelligence. They highlight the growing importance of testing in the software industry and discuss the various types of testing that exist. The speaker also touches on the process of diagnosing bugs and the challenges involved. They mention that their presentation will focus on the inbuilt techniques within AI and delve into the algorithms and macros that make up artificial intelligence.
  • 05:45:00 In this section, the speaker talks about the complexities of debugging and finding root causes in software development. They explain that sometimes reproducing a bug is difficult, and bugs may only appear in special cases or environments. This is where AI comes into play, as it can help identify what led to the defect and assist in planning better tests and fixing the bug. The speaker introduces the concept of TDP (Test, Diagnose, and Plan) and explains that AI can generate possible diagnoses for bugs, which can then be passed on to the programmer for fixing. They also discuss the key concepts of AI techniques, including model-based diagnosis and the challenges of modeling software behavior.
  • 05:50:00 In this section, the speaker discusses the concept of planning in software diagnosis using AI techniques. They explain that to find the best diagnosis with as few tester actions as possible, it is important to plan a sequence of tests, and they compare highest-probability, lowest-cost, and entropy-based planning techniques. The idea is to establish the probability of each candidate faulty function, factor in the cost of each test, and choose the test whose outcome is expected to reduce entropy the most, i.e. to yield the highest information gain (a minimal sketch of this selection rule appears after this list). They also note that this is planning under uncertainty, since the outcome of a test is unknown until it is run.
  • 05:55:00 In this section, the speaker discusses the concept of Markov decision processes (MDPs) in the context of test planning and diagnostics. The MDP formulation uses states, actions, transition probabilities, and rewards to minimize the expected cost of testing (a toy value-iteration sketch of this idea also appears after this list). The speaker presents preliminary experimental results showing the effectiveness of the MDP approach compared to the lowest-cost (LC), entropy-based, and highest-probability (HP) techniques. They emphasize the importance of integrating AI-based systems with existing tools and components in order to achieve optimal results. The speaker also highlights the benefits of using AI in testing, including improved bug diagnosis and the generation of predictive analytics for testing teams throughout the project life cycle.
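For the entropy-based planning described in the 05:50:00 section, here is a minimal sketch of picking the next test by expected information gain per unit cost. The fault model, probabilities, and test names are invented for illustration and are not taken from the talk.

```ts
// Each candidate diagnosis has a prior probability; each test, if it fails,
// narrows the suspects down to a known subset.
type Distribution = Record<string, number>;

interface DiagnosticTest {
  name: string;
  cost: number;
  implicatesOnFail: string[]; // diagnoses still possible if this test fails
}

function entropy(dist: Distribution): number {
  return -Object.values(dist)
    .filter((p) => p > 0)
    .reduce((sum, p) => sum + p * Math.log2(p), 0);
}

function renormalize(dist: Distribution, keep: string[]): Distribution {
  const kept = Object.entries(dist).filter(([d]) => keep.includes(d));
  const total = kept.reduce((sum, [, p]) => sum + p, 0) || 1;
  return Object.fromEntries(kept.map(([d, p]) => [d, p / total]));
}

function pickNextTest(prior: Distribution, tests: DiagnosticTest[]): DiagnosticTest {
  const h0 = entropy(prior);
  let best = tests[0];
  let bestScore = -Infinity;
  for (const t of tests) {
    const pFail = t.implicatesOnFail.reduce((sum, d) => sum + (prior[d] ?? 0), 0);
    const passSet = Object.keys(prior).filter((d) => !t.implicatesOnFail.includes(d));
    // Expected entropy after seeing the test outcome, weighted by how likely each outcome is
    const expectedEntropy =
      pFail * entropy(renormalize(prior, t.implicatesOnFail)) +
      (1 - pFail) * entropy(renormalize(prior, passSet));
    const score = (h0 - expectedEntropy) / t.cost; // information gain per unit cost
    if (score > bestScore) {
      bestScore = score;
      best = t;
    }
  }
  return best;
}

// Usage: three suspect modules, two candidate tests
const prior: Distribution = { parser: 0.5, cache: 0.3, ui: 0.2 };
const next = pickNextTest(prior, [
  { name: "parser regression suite", cost: 2, implicatesOnFail: ["parser"] },
  { name: "full smoke run", cost: 5, implicatesOnFail: ["parser", "cache"] },
]);
console.log(next.name); // the cheaper, more informative test wins here
```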
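And for the MDP view in the 05:55:00 section, a toy value-iteration sketch that minimizes the expected cost of reaching a confirmed diagnosis. The states, actions, transition probabilities, and costs are invented placeholders, not figures from the talk.

```ts
type State = "unknown" | "suspectA" | "suspectB" | "diagnosed";
type Action = "cheapTest" | "preciseTest";

interface Outcome { to: State; p: number }

// Which states each test can lead to, and with what probability
const transitions: Record<State, Partial<Record<Action, Outcome[]>>> = {
  unknown: {
    cheapTest: [{ to: "suspectA", p: 0.5 }, { to: "suspectB", p: 0.5 }],
    preciseTest: [{ to: "diagnosed", p: 0.7 }, { to: "unknown", p: 0.3 }],
  },
  suspectA: { preciseTest: [{ to: "diagnosed", p: 0.9 }, { to: "suspectA", p: 0.1 }] },
  suspectB: { preciseTest: [{ to: "diagnosed", p: 0.8 }, { to: "suspectB", p: 0.2 }] },
  diagnosed: {}, // terminal: nothing left to do
};

const cost: Record<Action, number> = { cheapTest: 1, preciseTest: 5 };

// Value iteration on expected cost-to-go:
//   V(diagnosed) = 0
//   V(s) = min over actions a of  cost(a) + sum over s' of P(s' | s, a) * V(s')
function expectedCosts(iterations = 200): Record<State, number> {
  const V: Record<State, number> = { unknown: 0, suspectA: 0, suspectB: 0, diagnosed: 0 };
  for (let i = 0; i < iterations; i++) {
    for (const s of Object.keys(transitions) as State[]) {
      const actions = Object.keys(transitions[s]) as Action[];
      if (actions.length === 0) continue; // keep the terminal state at cost 0
      V[s] = Math.min(
        ...actions.map((a) => {
          const outcomes = transitions[s][a]!;
          return cost[a] + outcomes.reduce((sum, o) => sum + o.p * V[o.to], 0);
        })
      );
    }
  }
  return V;
}

// V("unknown") is the cheapest expected route from "no idea" to a confirmed diagnosis
console.log(expectedCosts());
```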

06:00:00 - 07:00:00

In this section of the QA Global Summit, the speakers discuss the implementation of AI in testing and highlight the importance of testing AI models before using them for testing purposes. They also emphasize the need to unlearn certain things and be adaptable in order to embrace new technologies. The speakers share various tools and approaches for evaluating AI-driven tools and discuss the benefits of using AI in mobile testing. They also address the challenges of implementing AI in QA, including the need for accurate data and ongoing learning. Finally, they announce an upcoming event and discuss the trends and benefits of no code automation tools. They explore different approaches to automation testing and highlight the advantages and limitations of each approach.

  • 06:00:00 In this section, the speaker discusses the various metrics that can be captured through asset analytics and staff analytics, such as customer analytics and delivery analytics. They also mention the benefits of using AI testing, including early completion of the software testing lifecycle, better coverage, and the creation of reusable prediction models. The speaker shares their experience with different AI testing tools and emphasizes the importance of conducting a POC (proof of concept) to determine the best tool for specific requirements. They also mention two case studies where they found success using the Eggplant AI testing tool in healthcare implementations. Finally, the speaker highlights the amalgamation of AI in test creation, execution, and data analysis, which eliminates the need for manual test case updates and enhances defect identification. Overall, the speaker believes that AI testing has a promising future and has received positive feedback on its implementation.
  • 06:05:00 In this section, the speaker discusses the usage of AI in testing and emphasizes the need to test AI models before using them for testing purposes. They recommend using testing tools that support AI functions or are linked with AI results. They suggest writing unit tests for AI models and checking that the models behave correctly and produce the desired outcome (a minimal example of such a sanity check appears after this list). The speaker emphasizes the importance of quality in AI products and highlights the need for scientific analysis and diagnosis before blindly implementing AI in testing.
  • 06:10:00 In this section, the speakers discuss the criteria and strategies for evaluating AI-driven tools. They mention that most tools in the market are paid, but there are also open source options available. They recommend starting with free tools such as ReportPortal and exploring the libraries and learning models provided by companies like Google and Amazon. The speakers also mention the importance of considering the project needs and requirements when selecting a tool. Additionally, they discuss the mindset shift required to embrace AI and ML, emphasizing the need to unlearn certain things in order to move forward with the rise of AI. They highlight the importance of adaptability and being open to change.
  • 06:15:00 In this section, the speakers discuss the importance of learning and unlearning in order to successfully embrace new technologies and tools. They mention the need to adapt and be agile in response to market changes. They highlight the significance of gathering information, training, and staying updated with the latest software in order to help implement these solutions. It is important to consider project requirements and company needs when choosing the appropriate tools and approaches.
  • 06:20:00 In this section, the speakers discuss various solutions focused on testing from a business standpoint, supported by AI. They mention tools like visual UI testing, accessibility testing, implementing reporting tools like Report Portal, and integrating Alexa with Jira for AI-supported solutions. They also address whether the mentioned tools work with the UI or test the code, and if it's possible to analyze automation frameworks for flows. The general consensus is that the tools mainly work with the UI and functionality of the software, and while it's possible to analyze automation frameworks, it's important to test the current software's usefulness for a specific project or implementation. Additionally, one of the speakers mentions Spidering AI as a tool for writing unit tests that can identify tests on their own, reducing the need for developers or QAs to write them. Finally, the topic of using AI with mobile testing and integration with other tools is brought up, but none of the speakers have personal experience with this specific combination.
  • 06:25:00 In this section, the speakers discuss the integration of AI in mobile testing and how it can enhance the agility and efficiency of running tests. They mention tools like Cloud properties and app team that leverage AI capabilities to generate comparison reports and identify performance discrepancies between different releases. They also highlight the importance of customization and configuration in implementing AI tools for usability and user experience testing. Additionally, they address the misconception that QA only means testing and talk about the value that QA can bring with the advent of AI.
  • 06:30:00 In this section, the speakers discuss the benefits of implementing AI in QA. They mention that one advantage is the ability to identify and correct mistakes faster, preventing failures and reducing waste of time and resources. They also highlight the automation capabilities of AI, which can significantly reduce the development time and improve code review processes. The speakers agree that there is still a lot of ground to be covered in bridging the gap between AI as a science and its application in general testing.
  • 06:35:00 In this section, the speakers discuss the challenges of implementing AI in various industries, including QA. They highlight the importance of having the right amount and quality of data to obtain accurate results from AI/ML systems. AI is compared to a constantly learning child, as it has the capacity to process massive amounts of information and learn at a faster pace than humans. However, the speakers emphasize the need for humans to also keep learning and adapting to AI systems to bridge the gap. They acknowledge that there is still a lot of ground to be covered in AI, and it requires exploring different areas and being adaptable to changes.
  • 06:40:00 In this section, the speaker announces an upcoming event called the Global Summit for Node.js, which focuses on testing. They provide the website link and promo code for attendees to check out and mention that the CEO of sqi, Jonas, will be speaking next. The speaker expresses excitement about Jonas joining the QA Global Summit again.
  • 06:45:00 In this section, the speaker discusses the topic of no code automation and its current trend. They share that they started their company as a no code automation tool but have since developed into a developer-focused automation tool, merging the no code and code approaches. The speaker shares that they have spoken to many test engineers about their thoughts on the no code trend and will be sharing their learnings and answers. They also provide some facts about the growing trend of no code, including its exponential growth and how it may be related to the digitalization trend caused by COVID-19. The speaker then dives into the definition of no code automation, highlighting some points they disagree with, such as the use of drag and drop interfaces and the reliance on graphical user interfaces. They argue that there are now more no code solutions that utilize voice or text inputs instead.
  • 06:50:00 In this section, the speaker discusses the rise of no code tools and their benefits for both technical and non-technical users. They argue that the main purpose of no code is to make our lives easier, regardless of our technical background. The speaker then explores different no code tools and their promises, such as speeding up development processes, better maintainability, and making automation accessible to anyone. However, they express skepticism about certain claims, such as eliminating flaky tests. The speaker also delves into the approaches used by these tools, including record and replay, selector recording, relational descriptions, and AI object detection. They highlight the advantages and limitations of each approach.
  • 06:55:00 In this section, the speaker discusses different approaches to automation testing, specifically focusing on selector recording and pixel matching. They explain that instead of using selectors, which can be unreliable when access to automation IDs is limited, elements can be cropped out visually and located again with pixel matching or AI-enhanced object detection (a naive pixel-matching sketch appears after this list). This approach is more stable, but it requires the element to always look the same on the screen, which can be a drawback. The speaker also mentions that current no code automation tools are built on these approaches, but many test engineers struggle with maintenance and prefer to move away from these tools due to selector issues and limitations. The speaker then explains the tech limitations of applications and how they impact selector recording, mentioning shadow DOM, iframes, and canvas elements; it is important to know the tech limitations of the application before choosing a tool for automation. Additionally, the speaker highlights that their own tool uses AI object recognition to overcome selector problems but also addresses other challenges such as implementing loops and setting variables.
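For the 06:05:00 point about unit-testing a model before trusting it, here is a minimal sketch of the kind of sanity checks meant. The scoreSpam() wrapper and its thresholds are invented stand-ins for whatever interface a real model is exposed through.

```ts
import assert from "node:assert";

// Toy stand-in for a real model call; in practice this would invoke the trained model.
function scoreSpam(text: string): number {
  const spammyWords = ["free", "winner", "click now"];
  const hits = spammyWords.filter((w) => text.toLowerCase().includes(w)).length;
  return Math.min(1, hits / spammyWords.length + 0.05);
}

// 1. The output must be a valid probability for any input
for (const input of ["", "hello team", "FREE WINNER click now!!!"]) {
  const p = scoreSpam(input);
  assert.ok(p >= 0 && p <= 1, `score out of range for "${input}": ${p}`);
}

// 2. Behavioural check on a case where the expected direction is obvious
assert.ok(
  scoreSpam("FREE WINNER click now!!!") > scoreSpam("meeting notes attached"),
  "an obviously spammy message should score higher than a routine one"
);

// 3. Determinism: the same input must give the same score on repeated calls
assert.strictEqual(scoreSpam("free trial"), scoreSpam("free trial"));

console.log("model sanity checks passed");
```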
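And for the 06:55:00 discussion of locating a cropped element by its pixels rather than by a selector, a deliberately naive template-matching sketch; real tools layer scaling tolerance, thresholds, and AI-based object detection on top of ideas like this.

```ts
// Grayscale image as a row-major byte array, one byte per pixel
interface Gray {
  width: number;
  height: number;
  data: Uint8Array;
}

// Slide the cropped template over the screenshot and return the position
// with the smallest mean absolute pixel difference.
function matchTemplate(screen: Gray, template: Gray): { x: number; y: number; score: number } {
  let best = { x: 0, y: 0, score: Number.POSITIVE_INFINITY };
  for (let y = 0; y + template.height <= screen.height; y++) {
    for (let x = 0; x + template.width <= screen.width; x++) {
      let diff = 0;
      for (let ty = 0; ty < template.height; ty++) {
        for (let tx = 0; tx < template.width; tx++) {
          const s = screen.data[(y + ty) * screen.width + (x + tx)];
          const t = template.data[ty * template.width + tx];
          diff += Math.abs(s - t);
        }
      }
      const score = diff / (template.width * template.height);
      if (score < best.score) best = { x, y, score };
    }
  }
  // A low score means "this region looks like the element did when it was recorded" —
  // which is also why the approach breaks if the element's appearance changes.
  return best;
}
```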

07:00:00 - 08:00:00

In this section, the speaker discusses the concept of a distributed ledger in blockchain technology and its role in keeping transactions secure. They explain that by distributing the ledger across multiple nodes, the record of a transaction is stored in various locations, making it difficult for hackers to compromise the system. The speaker emphasizes the importance of understanding the wider architecture of a blockchain system, including the hardware infrastructure, messaging, data, network layer, consensus layer, and application and presentation layer.

  • 07:00:00 In this section, the speaker addresses some of the questions and concerns about using a no-code automation tool. They caution against expecting it to solve all problems and replace testers, emphasizing that it should be seen as a support tool for testing. Additionally, they advise potential users to thoroughly evaluate the tool's limitations and requirements before making a purchase, suggesting checking the roadmap, getting demos, and trying out free versions if available. They also provide a timeline of the evolution of no-code tools, from early record and replay systems to AI-based solutions and future trends like autonomous testing. Overall, while acknowledging the value of no-code, they urge users to have realistic expectations and understand the limitations of these tools.
  • 07:05:00 In this section, the speaker discusses the value of no-code automation and its limitations. They mention that while no-code automation is slowly gaining usefulness, it may not be suitable for all scenarios. For small projects or startups with limited workflows, using no-code automation tools can be a simple and resource-efficient option. However, for larger enterprises with complex requirements and custom integrations, it is recommended to opt for test engineers and a proper setup instead. The speaker advises being aware of the limitations of no-code automation before implementing it. Additionally, the speaker talks about the potential use of AI for object detection and user interface element recognition. They explain how AI can be trained to recognize UI elements and answer questions about them, which can be useful in testing for things like e-commerce websites.
  • 07:10:00 In this section, the speaker discusses the concept of no-code and mentions GPT-3 as a tool that can help generate startup ideas using natural language input. They also mention the possibility of using models trained on test cases to come up with new test cases that testers may not have thought of before. The speaker then briefly touches on their experience with leveraging no-code automation, moving away from a manual process of clicking buttons and using natural language steps to developing an SDK with commands as a fluent API. They conclude by emphasizing that manual testing is not dead and expressing their hope that they have provided some value in their talk.
  • 07:15:00 In this section, Jenna Charlton discusses some of the common challenges in test automation for teams. These challenges include being in a time crunch, limited research and decision-making time, cost constraints, skill gaps, and decision paralysis due to the overwhelming number of options available. She emphasizes the importance of finding potential solutions to these challenges rather than doing nothing.
  • 07:20:00 In this section, the speaker discusses the different options for QA teams, including sticking with manual testing, starting an automation project, or exploring low and no code solutions. They highlight the potential downsides of doing nothing, such as eventual failures due to human error in manual testing, and the time, money, skills, and maintenance required for traditional automation. They also mention the potential technological limitations and costs associated with low and no code solutions. The speaker emphasizes the importance of finding the right solution that balances what works for the team and feels like the right choice. They focus on the benefits of low and no code solutions, such as speed to launch and cost-effectiveness compared to traditional automation, as well as the ability to bridge the gap between manual testing and automation.
  • 07:25:00 In this section, the speaker discusses the gap between business expertise and technical knowledge in testing and how it can be bridged using automation tools. They highlight the importance of involving business analysts and getting them to record tests, which can then be automated. The speaker also talks about the evolution of test recorders, starting from the early days of Selenium record and playback, which had limitations such as brittleness and lack of flexibility. They then introduce the three categories of new tools: AI-driven tools, selector-driven tools, and visual validation tools. The speaker emphasizes the need to choose tools that best suit the organization's needs.
  • 07:30:00 In this section, the speaker discusses the future of test automation and its integration with AI. They emphasize the importance of ethical AI and clarify that AI will not replace testers but rather be a tool to support their work. The speaker explains that AI-based automation is particularly beneficial for regression testing and repetitive tasks, freeing up testers to focus on higher-impact work. They highlight the advantages of AI, such as image comparison, self-healing, and reducing test maintenance. Ultimately, the speaker emphasizes that the goal is to lighten the workload of software development teams and prioritize the well-being of the humans involved in the process.
  • 07:35:00 In this section, the speaker emphasizes the importance of automation in enabling testers to engage in more meaningful work. They outline the various tasks that a testing team typically performs, including analysis, test design, test execution, reporting, regression cycles, and planning. The speaker also introduces the concept of test debt, highlighting how it can weigh down a team's progress. They draw an analogy to credit card debt, explaining how initially manageable test and tech debt can quickly become overwhelming when interest starts to accumulate. The speaker stresses that test debt is particularly troublesome as it tends to compound over time, making it harder for teams to catch up. They assert that low and no code automation tools can be instrumental in addressing these challenges by helping teams tackle unaddressed defects and testability issues.
  • 07:40:00 In this section, the speaker discusses the concept of "test debt" and the consequences of not addressing it. They highlight several issues that contribute to test debt, such as outdated versions of tests, automation that is not up to date, and a lack of risk analysis. The speaker also emphasizes the importance of maintaining traceability between tests, stories, and associated risks. They encourage reaching out for more information and offer assistance with tools for web and mobile testing. The section concludes with the speaker inviting questions and expressing gratitude for the opportunity to present.
  • 07:45:00 In this section, the speaker introduces himself as a QA architect and also the owner of a board game company. He mentions that the focus of his talk is to help the audience understand blockchain technology and its importance in the global industry. He explains that blockchain is not just about Bitcoin and NFTs, but a distributed ledger that records transactions securely and can be accessed by multiple parties. The speaker aims to discuss how to identify and test blockchain technology, and also raises questions about the potential impact of blockchain on the industry.
  • 07:50:00 In this section, the speaker explains the concept of blockchain in simple terms. They clarify that a transaction can be any form of interaction online, not just cryptocurrency exchanges. When a transaction is requested, it creates a block that represents the transaction and adds it to the blockchain. This block is then distributed to every node in the network for validation. The speaker emphasizes the importance of security measures and cryptography to verify transactions. They mention the possibility of receiving rewards for the proof of work involved in processing and validating transactions. The different types of nodes in a blockchain network are discussed, including validator nodes that can initiate and receive transactions and data nodes that can validate the transactions. The speaker also notes that proof of work is not essential in all blockchain approaches.
  • 07:55:00 In this section, the speaker explains the concept of a distributed ledger in blockchain and its role in keeping transactions secure. By distributing the ledger across multiple nodes, the record of a transaction is stored in many locations, making it difficult for hackers to compromise the system (a tiny hash-chained ledger sketch showing why tampering is detectable appears after this list). The speaker emphasizes that blockchain systems are not tested in isolation and highlights the importance of understanding the wider architecture of the system, including the hardware infrastructure, messaging, data, network layer, and consensus layer. Finally, the speaker mentions the application and presentation layer, which are the visible components of a blockchain system.
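As a companion to the 07:50:00–07:55:00 sections, here is a minimal, purely illustrative sketch of a hash-chained ledger. It is not how any specific blockchain is implemented; it only shows why editing an old transaction is immediately detectable once each block's hash covers the previous block's hash.

```ts
import { createHash } from "node:crypto";

interface Block {
  index: number;
  previousHash: string;
  transaction: string; // any recorded interaction, not only a currency transfer
  hash: string;
}

function hashBlock(index: number, previousHash: string, transaction: string): string {
  return createHash("sha256").update(`${index}|${previousHash}|${transaction}`).digest("hex");
}

function appendBlock(chain: Block[], transaction: string): Block {
  const previousHash = chain.length > 0 ? chain[chain.length - 1].hash : "0".repeat(64);
  const index = chain.length;
  const block: Block = { index, previousHash, transaction, hash: hashBlock(index, previousHash, transaction) };
  chain.push(block);
  return block;
}

// Every node holding a copy can re-check the whole chain independently.
function isValid(chain: Block[]): boolean {
  return chain.every(
    (b, i) =>
      b.hash === hashBlock(b.index, b.previousHash, b.transaction) &&
      (i === 0 || b.previousHash === chain[i - 1].hash)
  );
}

const ledger: Block[] = [];
appendBlock(ledger, "Alice signs contract #42");
appendBlock(ledger, "Bob approves contract #42");

ledger[0].transaction = "Alice signs contract #99"; // tamper with history...
console.log(isValid(ledger)); // ...and every honest copy flags it: false
```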

08:00:00 - 09:00:00

In this YouTube video titled "QA Global Summit 22.2 – Junior Track," the speakers discuss various aspects of testing blockchain technology and provide insights and recommendations for QA professionals. They highlight the benefits of blockchain technology, including enhanced security and privacy, as well as its real-world applications in areas such as smart contracts and healthcare records. The speakers emphasize the importance of understanding the functionality of blockchain applications, API testing, load testing, automation, and security testing. They also discuss the use of mocking in testing blockchain networks and recommend various frameworks and tools for testing and optimizing blockchain applications. Additionally, the speakers address topics related to software usage for individuals with ADHD and the importance of code optimization and testing tools. They also provide advice on automation for legacy products and the selection of automation and server app software. The session concludes with a discussion on the incorporation of manual and automated testing in the testing process.

  • 08:00:00 In this section, the speaker discusses the benefits of blockchain technology. They highlight that blockchain offers enhanced security, as it is difficult to compromise the data once it is written on the network. It also provides privacy by keeping records of transactions while keeping the identity of the individuals involved anonymous. The speaker gives examples of how blockchain can be used for verifying software downloads, mitigating DDoS attacks, tracking computer hardware components, and even tracing the origins of food in a retail store. They emphasize that these features make blockchain crucial in today's world where privacy and security are important considerations.
  • 08:05:00 In this section, the speaker discusses various real-world applications of blockchain technology and emphasizes the benefits they bring. They explain how blockchain can ensure transparency and trust in areas such as smart contracts, digital voting, healthcare records, and crowdfunding. By recording every transaction on the blockchain, it becomes accessible to everyone while still preserving the privacy of individuals. This enables better accountability, prevents fraud, and allows for more accurate and informed decision-making. The speaker suggests exploring the slides and highlights the exciting potential of blockchain technology. Finally, they mention the purpose of the video, which is to provide guidance on how to test blockchain applications.
  • 08:10:00 In this section, the speaker emphasizes several key aspects of testing blockchain-based applications. Firstly, understanding the functionality of the application and why using blockchain is necessary is crucial. API testing is also important since most blockchain applications rely on APIs to communicate. Load testing is essential to ensure efficient processing speed, as an inefficient algorithm can slow down the entire network. Automation is highly recommended for rapid and accurate updates in the blockchain network. Security is a top priority: although blockchain technology itself is generally secure, vulnerabilities can exist at the application layer, so thorough security testing is necessary. Lastly, the speaker mentions the importance of creating mock test environments, as fully replicating a production blockchain network is challenging.
  • 08:15:00 In this section, the speaker emphasizes the importance of using mocking in testing blockchain networks. Due to the variability of blockchain networks and the difficulty of replicating the production environment, relying on mocks can help ensure secure and effective testing. The speaker recommends mocking all the different dependencies, APIs, and communication channels involved in the network. By automating testing and achieving high coverage, developers can ensure that their code updates are secure and compatible with the network. The speaker also mentions several frameworks, such as Ethereum's TestRPC, Truffle, Drizzle, and Ganache, that make it easier to set up blockchain networks and build test frameworks. Solidity, a contract-oriented language with JavaScript-like syntax, is commonly used for programming smart contracts on these networks, and the speaker provides examples of contract setups in Solidity. Overall, the focus is on the importance of mocking in testing and leveraging available frameworks for more efficient development.
  • 08:20:00 In this section, the speaker discusses the importance of testing blockchain technology before it becomes an immutable problem. They explain that by using mocks, developers can test different scenarios and ensure the right responses are received. They provide an example of a test script using the Ethereum network, where values are passed through and validated against the expected outcome (a minimal Truffle-style test sketch appears after this list). The speaker emphasizes that blockchain technology can open up a data revolution by allowing for distributed data storage and eliminating the need for large data centers. They believe that blockchain has the potential to spark a data-driven revolution in the software industry, presenting new opportunities for developers and testers alike.
  • 08:25:00 In this section, the speaker discusses how ADHD can impact software usage and provides insights for QA professionals on how to take it into account when analyzing products. They highlight the overwhelming nature of busy web pages and inconsistent designs, emphasizing the importance of minimizing visual and auditory noise and maintaining consistency in user interface elements. Additionally, they suggest that offering features like dark mode can be beneficial for individuals with ADHD and autism spectrum disorder. The speaker also addresses the issue of garbage and obsolete data in blockchain and emphasizes the need for careful optimization and performance testing to ensure efficiency and cost-effectiveness. They emphasize the importance of getting it right before deploying to production and the responsibility that comes with creating unchangeable software.
  • 08:30:00 In this section, the speakers are asked about their recommendations for code optimization and testing tools. They mention various tools depending on the requirements, such as TestComplete, Tosca, Eggplant, and Preflight for web and mobile testing. They also mention that Functionize is a good option for enterprise-grade testing. Additionally, they suggest reaching out to experts to recommend tools based on specific app needs. The speakers also discuss backend testing and mention that tools like Functionize and Testim are suitable for both frontend and backend testing. They mention Applitools for frontend testing and apologize if their information is incorrect. Lastly, they mention Ethereum Tester as a useful tool for testing on the Ethereum network in blockchain testing.
  • 08:35:00 In this section, the speaker discusses the different tools that can be utilized to create and test blockchain networks, such as Truffle and Ganache. They mention that there are already a lot of available tools, but in the next five to ten years, there will likely be even more tools specifically designed for blockchain testing. In another part of the video, the speakers address questions related to automated test scripts. They suggest considering the value and relevance of the existing 13,000 test scripts before deciding how to proceed with rewriting them. They also mention the option of using tools that can help with importing and organizing the test scripts. Additionally, they emphasize the importance of assessing the value and validity of manual and automated test cases before deciding on the next steps for release and regression testing.
  • 08:40:00 In this section, the speaker discusses the options for automating tests for legacy products and emphasizes the importance of delivering value through tests. They suggest using AI-based and low-code/no-code tools to quickly build tests. The speaker also mentions the time and effort it may take to automate tests for legacy applications and advises rethinking the automation and testing strategy for optimal results. In another section, the speaker provides advice on working with blockchain technologies, suggesting focusing on the basics of the contract and not getting overwhelmed by the underlying details. For smaller projects, the speaker recommends exploring special tools that offer simple structures and automation possibilities, allowing testers to focus on exploratory testing. They also mention the potential benefits of AI in automating tests. Finally, when it comes to selecting a server app software for creating an automated pipeline, the speaker suggests considering Jenkins as a recommended option.
  • 08:45:00 In this section of the video, the speakers discuss different tools that can be used for automation and emphasize that the choice of tool depends on the specific goals and requirements of the user. They suggest looking at the tools used by companies you are interested in working for. When it comes to AI no-code tools for practicing Scrum, the speakers mention Tosca as a possible option that fits well into agile methodologies. They also discuss the use of no-code for NLP (natural language processing) and state that it is the foundation for NLP and AI. However, they are not aware of specific products for testing NLP models.
  • 08:50:00 In this section, the speaker mentions that there is a tool for chat bot and Bot testing, although they cannot remember the name. They encourage the viewer to reach out on either Twitter or LinkedIn for more information. The speaker also confirms that low code flows can be included in continuous integration and deployment and should be included in the testing pipeline. The section concludes with the moderator expressing gratitude for the session and announcing a short break before the next block of sessions begins.
  • 08:55:00 In this section, the speaker discusses the importance of incorporating both manual and automated testing in the testing process. They explain that manual testing and automated testing serve different purposes, with manual testing being comparable to an elite commando team performing specific tactical validations, and automated testing providing the infrastructure to achieve stability and faster product releases. The speaker emphasizes that both manual and automated testing are necessary to achieve high-quality products, especially in agile and DevOps environments. They also mention the concept of a "testing puzzle" where different testing methodologies come together to provide comprehensive coverage.
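To make the 08:20:00 example concrete, here is a minimal sketch of a Truffle-style test run against a local network such as Ganache. The SimpleStorage contract and its set()/get() functions are hypothetical, and the Truffle/Mocha globals are declared only so the sketch reads as standalone TypeScript.

```ts
// Truffle injects these into test files at runtime; declared here for the sketch only.
declare const artifacts: { require(name: string): any };
declare function contract(name: string, suite: (accounts: string[]) => void): void;
declare function it(name: string, fn: () => Promise<void>): void;
declare const assert: { equal(actual: unknown, expected: unknown, message?: string): void };

const SimpleStorage = artifacts.require("SimpleStorage"); // hypothetical contract

contract("SimpleStorage", (accounts) => {
  it("round-trips a value through the contract", async () => {
    const instance = await SimpleStorage.deployed();
    await instance.set(42, { from: accounts[0] }); // send a transaction from a test account
    const stored = await instance.get();           // read the value back via a view call
    assert.equal(stored.toNumber(), 42, "the stored value should match what was set");
  });
});
```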

09:00:00 - 10:00:00

The QA Global Summit 22.2 – Junior Track video covers various topics related to software testing, including test clustering, the challenges of scale in testing, test orchestration, coordination and communication among teams, reporting and metrics, cognitive biases in testing, and the importance of subjective reporting. The speaker emphasizes the need for a more holistic approach to testing and highlights the importance of providing context and goals when presenting data. They also stress the value of considering subjective aspects of testing and focusing on customer experience. The video encourages feedback and discussion on these topics, with the promise of offline conversation.

  • 09:00:00 In this section, the speaker discusses the testing puzzle and the issue of test clustering. They emphasize that testing is more than just manual and automated testing, as there are various other components to consider, such as unit tests, integration testing, user acceptance testing, and customer feedback. The speaker also explains the concept of test clustering, where tests tend to cluster in certain areas, resulting in both over-testing and blind spots. They highlight the need for a more organized approach to testing in order to optimize coverage. Additionally, the speaker points out the differences between manual and automated testing, stating that automated test cases are more specific and defined, while manual tests are more extensive and exploratory.
  • 09:05:00 In this section, the speaker discusses the challenges of scale in testing and runs. They explain that automatic testing typically involves orders of magnitude more test cases and runs compared to manual testing. They also highlight the differences in timelines and the frequency of runs between automation and manual testing. The speaker emphasizes the need for better coordination and information sharing among different teams to improve test case reuse and integration. Additionally, they outline the different objectives of manual and automation testing, with manual testing focused on evaluating and discovering issues while automation testing aims for stability and certification. The speaker introduces the concept of test orchestration, comparing it to an orchestra where all instruments work together to create a bigger impact. They stress the importance of considering all testing tools and their contributions to generate greater value.
  • 09:10:00 In this section, the speaker discusses the orchestration process of testing, which involves defining the purpose and scope of the testing process and understanding the needs of stakeholders. They emphasize the importance of identifying gaps and determining how different testing inputs, such as manual and automated approaches, can contribute to faster releases. The speaker provides an example of how teaching developers to do better manual testing improved the efficiency of the testing process. They also highlight the need to break down tasks into smaller chunks and prioritize them based on timelines and resources. The speaker encourages taking the time to do this planning, even in the midst of an agile project, in order to avoid "flying blind" and focus on specific goals.
  • 09:15:00 In this section, the speaker discusses the implementation phase of coordinating teams and ensuring everyone is working together. They emphasize the importance of good communication and coordination across teams, including sharing tests and working in the same repository. The speaker also emphasizes the need for a single reporting or visibility framework, as having multiple systems can lead to coordination issues. They recommend finding a reporting platform that all teams can use and normalizing the data to ensure consistency (a small normalization sketch appears after this list). Additionally, they highlight the importance of broadcasting the value of testing to other parts of the organization and suggest providing results through an integrated quality dashboard. The speaker also mentions the need to generate business value for stakeholders and tailor the dashboard to their specific needs.
  • 09:20:00 In this section, the speaker discusses the importance of making sure that important information is easily accessible to everyone. They suggest reusing the same information in multiple channels, such as through dashboards, emails, and status meetings, in order to improve diffusion and ensure that the information is properly absorbed. The speaker also emphasizes the need to balance centralized processes with the freedom for teams to try new tools and methods for innovation. Additionally, they highlight the importance of refactoring tests and making integrated testing a priority. The speaker concludes by reminding the audience that testing should always provide value to the system and that constant visibility and transparency are essential for a healthy testing process.
  • 09:25:00 In this section, the speaker discusses the frustration of test coordinators and managers when their reports and metrics are disregarded or not taken into consideration by the business. They initially blame the business for not understanding testing, but eventually realize that they may need to change the way they report their metrics. The speaker introduces the concept of subjective reporting as an alternative to objective reporting, using a painting to illustrate subjectivity and how it can be applied to testing. The goal is to provide stakeholders with information that is based on personal beliefs and feelings, rather than just facts.
  • 09:30:00 In this section, the speaker discusses the concept of objectivity in testing and highlights the presence of cognitive biases that can impact our perception of objective truth. They explain that metrics and data can give the illusion of objectivity, but cognitive biases such as negativity bias and bandwagon effect can influence our subjective reality. The speaker provides examples of how biases can affect testing, such as negative bias leading to a focus on open defects and bandwagon effect influencing others' beliefs. They emphasize the importance of being aware of these biases in order to maintain a more objective approach to testing.
  • 09:35:00 In this section of the QA Global Summit video, the speaker discusses examples of bias in testing. They share a humorous story about a colleague named Jack who always seemed to introduce new bugs whenever he deployed his work. This exemplifies the bandwagon effect, where expectations of bugs can lead testers to find more issues than necessary. The speaker also mentions inattentional blindness and how testers can miss obvious things, as demonstrated by the oversight of usability issues with a mobile app tested for functionality but not ease of use. Additionally, the speaker highlights the challenge of reporting metrics effectively, noting the overwhelming number of metrics available and the lack of clear goals or annotations for interpretation. Overall, bias and the need for more objective and concise reporting are key points in this segment.
  • 09:40:00 In this section, the speaker discusses the importance of providing context and goals when presenting data. They emphasize that without this information, different conclusions can be drawn from the same data. By adding the goal of executing 30 new feature test cases every month, they demonstrate how the interpretation of the graph changes. They also highlight the need for actionable insights and suggest automating regression testing to free up manual capacity for new feature testing. The speaker argues that subjective reporting can help reach a wider audience and mentions the importance of focusing on customer experience and production quality.
  • 09:45:00 In this section, the speaker discusses the importance of value assurance in addition to quality control and quality assurance. They explain that value has different aspects, such as product value, user experience, and customer experience. They emphasize the need to consider all these aspects to avoid the "watermelon effect," where only superficial aspects are tested while important user-related aspects are neglected. The speaker also mentions different ways to report on subjective aspects, including customer satisfaction metrics and the business mood board, which focuses on the business's happiness index.
  • 09:50:00 In this section of the QA Global Summit 22.2 – Junior Track, the speaker discusses the importance of subjective reporting in software testing. The speaker uses examples to demonstrate how subjective reporting can be used to address the business or audience's concerns and provide value, which is viewed as subjective. The speaker highlights the use of personas to describe user profiles and test cases based on those profiles. The speaker also stresses the importance of focusing on what the customer cares about the most in software testing, rather than sticking strictly to requirements without considering their translation.
  • 09:55:00 In this section, the speaker addresses the audience and invites them to share their feedback or thoughts on the topic discussed. They also express interest in hearing different perspectives and implementation strategies related to subjective reporting. The speaker then hints at the possibility of further conversation offline. After this, the speaker introduces the next speaker, Bob Mutoff, who will talk about the distinction between end-to-end and component tests.
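For the single-reporting-platform point at 09:15:00, here is a small sketch of normalizing results from different sources into one schema before they reach a shared dashboard. The two source formats below are invented stand-ins for whatever a manual test-management export and a JUnit-style automation report actually look like on a given project.

```ts
// The unified shape every result is mapped into
interface UnifiedResult {
  suite: string;
  test: string;
  status: "passed" | "failed" | "skipped";
  durationMs: number;
  source: "manual" | "automation";
}

// Hypothetical manual test-management export
interface ManualRecord {
  folder: string;
  caseTitle: string;
  outcome: "Pass" | "Fail" | "Blocked";
  minutesSpent: number;
}

// Hypothetical JUnit-style automation record
interface JUnitRecord {
  classname: string;
  name: string;
  failed: boolean;
  skipped: boolean;
  timeSec: number;
}

function fromManual(r: ManualRecord): UnifiedResult {
  return {
    suite: r.folder,
    test: r.caseTitle,
    status: r.outcome === "Pass" ? "passed" : r.outcome === "Fail" ? "failed" : "skipped",
    durationMs: r.minutesSpent * 60_000,
    source: "manual",
  };
}

function fromJUnit(r: JUnitRecord): UnifiedResult {
  return {
    suite: r.classname,
    test: r.name,
    status: r.skipped ? "skipped" : r.failed ? "failed" : "passed",
    durationMs: r.timeSec * 1000,
    source: "automation",
  };
}

// All teams feed the same dashboard, regardless of where the result came from
const dashboardFeed: UnifiedResult[] = [
  fromManual({ folder: "Checkout", caseTitle: "Pay by invoice", outcome: "Pass", minutesSpent: 8 }),
  fromJUnit({ classname: "CheckoutApiTests", name: "rejects expired card", failed: false, skipped: false, timeSec: 2.4 }),
];
console.log(dashboardFeed.length, "normalized results ready for the shared dashboard");
```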

10:00:00 - 11:00:00

In this section of the QA Global Summit 22.2 - Junior Track video, the speakers cover various topics related to automation and testing. They discuss difficulties in testing gameplay, demonstrate how to write Cypress tests for framework-specific components, explain component testing using Cypress, compare component testing to end-to-end testing, discuss the benefits of Cypress component testing, explore the implications of security in automation, explain the concepts of the automation pyramid and test readiness, discuss important principles in developing test scripts, and touch upon the topics of testability and automation adoption. They emphasize the importance of effective testing processes, collaboration between stakeholders, proper planning and recruitment, and clear communication.

  • 10:00:00 In this section, the speaker discusses the difficulties in testing gameplay, particularly when it involves randomness, timers, timestamps, and conditional testing. They show an example of testing a timer element and highlight the challenge of waiting for a 15-minute interval. They suggest that instead of writing an end-to-end test, a smaller piece of code, like a function, can be tested using unit tests, and they demonstrate running such a unit test with Cypress (a small sketch of this idea appears after this list).
  • 10:05:00 In this section, the speaker discusses how to write Cypress tests for framework-specific components. They explain that instead of using Cypress's cy.visit() command, you work with a per-component bundle inside the spec file. By selecting the component testing option in Cypress, it automatically detects the framework you are using. The test imports the component itself, its CSS styles, and any additional React context objects and third-party libraries it needs. The cy.mount() command is used to mount the React component and pass props, and cy.contains() is then used to verify that the component displays the expected text (a minimal component-test sketch appears after this list). The speaker emphasizes that the test is much faster than end-to-end tests because only the component needs to be bundled and there is no need to visit a URL. They also highlight that Cypress lets you see the component live in a real browser, making it easier to verify its functionality and styling.
  • 10:10:00 In this section, the speaker discusses component testing using Cypress. They explain how component testing works by passing data as props and using Cypress commands to interact with the component and validate the output. The speaker mentions that this approach is particularly useful for framework-specific testing, as Cypress commands can interact with components in a way that is not specific to a particular framework. The Cypress team has examples of component testing for different frameworks and languages, making it easy for developers to understand and maintain these tests. They also discuss how passing data as props can be useful for end-to-end testing.
  • 10:15:00 In this section, the speaker discusses different testing approaches using Cypress, specifically focusing on component testing and end-to-end testing. They explain that component testing is ideal for testing small pieces of code, such as functions or classes, and allows for easier control of hard-to-manipulate data. On the other hand, end-to-end testing is best for testing the entire user flow and is useful for ensuring the application runs smoothly as a whole. The speaker suggests considering the specific goals of the test when deciding between these two approaches.
  • 10:20:00 In this section, the speaker discusses the benefits of using Cypress component testing to ensure the stability of web applications. They explain that component testing allows you to focus on the code of individual components, making it easier to write end-to-end tests. The speaker highlights that Cypress's API can be used for both component and end-to-end testing, allowing you to learn it once and apply it to different types of tests. They also mention the advantage of running component tests in a full browser, which provides visibility into each step of the test execution. Overall, component testing with Cypress puts the emphasis on web application code stability and offers a comprehensive testing experience.
  • 10:25:00 In this section, the speaker discusses the importance of understanding the implications and impacts on security in automation. They highlight examples such as clients using unencrypted files for storing sensitive login credentials and relying on physical carry-on devices for logging in. The speaker emphasizes the need to answer the questions of who, what, why, where, when, and how before implementing automation and to set proper expectations and communicate them clearly. They also delve into the different levels and types of tests that can be automated, the importance of dedicated and skilled staff, the role of dedicated management support, and the need to begin automation early in the project. Furthermore, the speaker discusses the implementation of automation, including the selection of frameworks and tools, and the modeling of the system under test to identify potential problem areas.
  • 10:30:00 In this section, the speaker discusses the concept of the automation pyramid as a guideline for testing. The pyramid represents the different levels or granularity at which tests are conducted, with the base being more fine-grained and focused on individual aspects, and the top being more coarse and focused on the overall system. The speaker also explains the importance of test readiness, which involves evaluating if a test is suitable for automation based on its clarity, objectives, data requirements, and endpoints. Additionally, the speaker talks about the benefits of grouping tests into suites, which helps with automation management and organization. Different criteria for grouping include test type (such as unit tests and regression tests), functionality coverage, and dependency levels. Overall, the speaker emphasizes the importance of effective test classification, organization, and development for efficient and reliable testing processes.
  • 10:35:00 In this section, the speaker discusses important principles to remember when developing test scripts, such as making them stand alone, minimizing dependencies, and ensuring modularity and maintainability. The script should be designed to be reusable, compact, focused, and testable. The speaker then explains the key components of a test, including setup, execution, analysis, reporting, and cleanup (a bare-bones skeleton of these phases appears after this list). An implementation plan is also discussed, which serves as a roadmap for organizing and managing automation tasks within a project. Various approaches and considerations, such as test environment and data, as well as recruitment and job descriptions, are highlighted. The speaker emphasizes the importance of careful planning and consideration when recruiting, and makes recommendations for attracting the right candidates. Different levels of skills for junior, intermediate, and senior testers are outlined, with a focus on programming, problem-solving, communication, and quick learning ability.
  • 10:40:00 This section of the QA Global Summit 22.2 video covers the topic of testability and its role in automation teams. Testability refers to the ability of a software artifact to support testing in a given test context, including modules, requirements, and designs. Improving testability makes it easier to find faults within an artifact via testing and increases how much testing can be done with the same effort. Testability also leads to a higher level of automatability and benefits the team with efficient building of automated tests, increased reliability, and repeatability. However, there are upfront costs to consider, and the development team may need to separate logical page structure from presentation, locate page elements by ID or another unique identifier (see the selector sketch after this list), open up APIs and services, and use a unit test harness. Convincing management of the importance of automation and the need for full attention and investment is also discussed.
  • 10:45:00 In this section, the speaker discusses several key points related to automation in testing. Firstly, they emphasize the importance of quick feedback in order to fix errors and prevent rework. They also mention the need for smart implementation to minimize maintenance costs. The speaker highlights that not all tools support all systems and technologies, so collaboration with the development team is crucial to ensure proper access and functionality. Additionally, they address the fact that not all testers can write scripts and that specialized resources are necessary. The speaker warns against believing all the hype surrounding automation and emphasizes the need for proper communication skills to sell the importance of testing to various stakeholders. They stress the importance of defining what to automate, building testability into the application, and dispelling misconceptions about automation. Lastly, they mention their book which provides more in-depth information and resources for automation efforts.
  • 10:50:00 In this section of the QA Global Summit 22.2 – Junior Track video, the speakers discuss how to convince management to adopt a testing approach and when is the best time to implement it. They emphasize the importance of dispelling myths and misconceptions about automation and testing, explaining to management that it is its own form of software development and should be seen as a product. They also highlight the need for risk management and collaboration between stakeholders, including developers and testers, to ensure a qualitative delivery of work. Additionally, they stress the importance of proper code management and following best practices.
  • 10:55:00 In this section, the speakers discuss the importance of stakeholders in a project, including management, web developers, the QA team, and product owners. They highlight the need for stakeholders to understand the testing process and its impact on the end product. The discussion moves on to the best time to introduce testing tools and processes, with the suggestion to focus on processes upfront and introduce tooling after determining the approach. They also caution against haphazardly grabbing a tool without understanding the project's requirements and design. The conversation concludes with a discussion on how to handle team members who dislike testing without any factual or metric basis, emphasizing the importance of addressing their concerns and educating them on the benefits of testing.
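For the 10:00:00 timer example: rather than letting an end-to-end test wait out a 15-minute countdown, the formatting logic can be pulled into a plain function and exercised directly inside a Cypress spec (where describe, it, and Chai's expect are available as globals). formatRemaining() is a hypothetical helper; in a real project it would live in the application code and be imported into the spec.

```ts
// Hypothetical helper under test; normally imported from the app code.
function formatRemaining(totalSeconds: number): string {
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${String(minutes).padStart(2, "0")}:${String(seconds).padStart(2, "0")}`;
}

describe("formatRemaining", () => {
  it("covers the 15-minute case without waiting 15 minutes", () => {
    expect(formatRemaining(15 * 60)).to.equal("15:00");
    expect(formatRemaining(61)).to.equal("01:01");
    expect(formatRemaining(0)).to.equal("00:00");
  });
});
```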
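For the 10:05:00 component-testing walkthrough, a minimal sketch of a Cypress component spec for a hypothetical React &lt;Greeting&gt; component. cy.mount() is the mount command Cypress component testing registers in the support file; no server and no cy.visit() are needed, and the exact greeting text is invented.

```tsx
import React from "react";
import Greeting from "./Greeting"; // hypothetical component under test

describe("<Greeting />", () => {
  it("renders the name it receives as a prop", () => {
    cy.mount(<Greeting name="QA Summit" />); // bundle and mount just this component
    cy.contains("Hello, QA Summit").should("be.visible");
  });
});
```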
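For the 10:35:00 list of test-script phases, a bare-bones Mocha-style skeleton showing setup, execution, analysis/reporting (via assertions and the runner's reporter), and cleanup. The in-memory helpers are trivial stand-ins for whatever the real system exposes.

```ts
import assert from "node:assert";
import { describe, it, before, after } from "mocha";

// Trivial stand-ins so the skeleton runs on its own
const db = new Map<string, { name: string }>();
async function createTestUser(name: string): Promise<string> { db.set(name, { name }); return name; }
async function deleteTestUser(id: string): Promise<void> { db.delete(id); }
async function placeOrder(userId: string, sku: string): Promise<{ status: string; sku: string }> {
  return db.has(userId) ? { status: "CONFIRMED", sku } : { status: "REJECTED", sku };
}

describe("order placement (sketch)", () => {
  let userId: string;

  before(async () => {
    // Setup: the script creates its own data, so it stands alone and can run in any order
    userId = await createTestUser("qa-summit-user");
  });

  it("confirms an order for an existing user", async () => {
    // Execution + analysis: the assertion feeds the runner's reporting
    const order = await placeOrder(userId, "ABC-123");
    assert.strictEqual(order.status, "CONFIRMED");
  });

  after(async () => {
    // Cleanup: leave the environment in a known state for the next run
    await deleteTestUser(userId);
  });
});
```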
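And for the 10:40:00 point about locating elements by a stable, unique identity, a two-line contrast in Cypress syntax; the selector values are invented.

```ts
// Brittle: coupled to layout and styling classes, breaks on any redesign
cy.get("header ul > li:nth-child(3) a.btn.btn-primary").click();

// Robust: the development team exposes a stable hook added specifically for testability
cy.get('[data-testid="submit-order"]').click();
```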

11:00:00 - 11:20:00

The Junior Track of the QA Global Summit featured discussions on various topics related to software testing. Speakers shared their experiences of overcoming resistance within their teams to prioritize testing and highlighted the importance of being hands-on and working closely with the team. The challenges of testing in the metaverse and creating realistic data for testing purposes were also discussed. Additionally, the speakers emphasized the need to adapt and find practical solutions in the ever-evolving landscape of technology. The importance of integration testing, measuring software quality with metrics, and seeking customer feedback were also mentioned. The section concluded with gratitude and announcements for future events.

  • 11:00:00 In this section, the speaker shares their experience of facing resistance when trying to convince their team to prioritize testing. They decided to take a pragmatic approach by setting up a small form for entering defects and diving into analyzing those defects with the team. This helped gain their team's trust and eventually led to the discovery of a real issue. The speaker emphasizes the importance of being hands-on and working closely with the team to create awareness and acceptance for testing. The discussion then shifts to the challenges of testing in the metaverse and the complexity of creating realistic data for testing purposes. Overall, the speakers highlight the need to adapt and find practical solutions in the ever-evolving landscape of technology.
  • 11:05:00 In this section, the speaker discusses the potential risk of integrating data creation and cleanup processes into automation pipelines, as it may increase the overall time taken for test execution. They highlight alternative approaches such as performing a full provision of the environment before running tests or using pre-restored database images. The speaker also emphasizes the importance of regularly cleaning up environments to maintain a clean and known state. They mention the need to tear down environments after tests to avoid unexpected costs in cloud environments. Additionally, the speaker shares their experience of investigating system crashes and latency issues that could impact test results. In terms of ensuring the longevity of automation tests, they provide examples from past projects where data was separated from the tests and external databases were used to store temporary data. However, they acknowledge the need for maintenance and, at times, rebuilding or refactoring of tests and software after an extended period.
  • 11:10:00 In this section, the speaker discusses the importance of testing for edge cases in software systems, especially when APIs or database schemas change. They highlight that it is easy to miss cases where old data in the system no longer satisfies new validation rules, leading to crashes. They suggest that tests should be powerful enough to simulate these edge cases and bypass the user interface if necessary (a small sketch of this idea appears after this list). In addition, the conversation shifts to the different types of metrics that can be used to measure software quality, such as defect densities and coverage metrics. They emphasize the significance of metrics in determining the stability of an application and making informed decisions about shipping. The speaker also recommends continuously seeking customer feedback, particularly when delivering features incrementally in an agile environment, to ensure that the user experience aligns with expectations and values.
  • 11:15:00 In this section, the speaker discusses the importance of continuity checks in business development, highlighting how external factors can influence customer decisions. They also define value in three aspects: product value, user experience, and customer support. A personal example is given about a website called Bricklink, which has an outdated design but valuable features. The panelists emphasize the need for testing in the context of the application, staying calm during failures, and taking a moment to think before acting. They also mention the benefits of taking breaks and distracting oneself when facing challenges. The moderator thanks the panelists for their insights.
  • 11:20:00 In this section, the speaker expresses gratitude to all the attendees, speakers, and moderators for making the QA Global Summit possible. They mention that the event has been a long and successful one, with another 11 hours of content planned for the next day. The speaker also announces an upcoming no-code event in December and encourages interested viewers to register. They appreciate the efforts of the organizers and emphasize the professionalism of the event. Finally, they thank Julia for her excellent work in organizing the QA event and sign off, expressing excitement for the next day's lineup of speakers.
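For the 11:10:00 point about legacy data and new validation rules, a hedged sketch of a test that seeds old-format data directly, bypassing the UI (which would no longer accept it), and checks that the application still copes. The endpoints, payload, and port are hypothetical; fetch is the global available in Node 18+ and in browsers.

```ts
import assert from "node:assert";
import { describe, it } from "mocha";

describe("legacy records after a validation change (sketch)", () => {
  it("still serves a profile saved before the new phone-number rule", async () => {
    // Seed a pre-migration record through a hypothetical internal test-support API
    await fetch("http://localhost:3000/test-support/users", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name: "Old User", phone: "12345" }), // old, now-invalid format
    });

    // The public API must not crash when it meets that legacy data
    const res = await fetch("http://localhost:3000/api/users?name=Old%20User");
    assert.strictEqual(res.status, 200);
  });
});
```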
