Channel: Independent Testing - QATestLab

Comparison of Verification & Validation and Quality Assurance View Points


Most quality assurance activities carried out directly within the software development process can be classified as verification activities, while the QA activities associated with users' requirements at the very beginning or at the very end of the engineering process are classified as validation activities.

Comparison

On the other hand, defect prevention usually focuses on the early stages of software development, the various defect reduction activities usually focus on the middle and late stages of development, and defect containment concentrates on the operational stage, although it is associated with the design, planning, and implementation performed earlier.

By comparing defect prevention, reduction, and containment, we can connect verification with defect reduction activities, and validation with defect prevention and defect tolerance.

However, this mapping is not exact, because many quality assurance activities involve both verification and validation. For instance, the focus of acceptance testing is clearly validation, and the focus of unit testing is verification, but system testing contains both verification and validation components.

Inspection is an important defect reduction activity, similar to testing. Nevertheless, because there is no execution of the software and no direct observation of failures, inspection is more closely related to verification than to validation. For instance, most inspection activities are carried out on standard verification artifacts such as code or designs; less commonly, requirements and usage scenarios are also inspected, which is connected to validation.

Defect prevention deals with removing error sources and blocking faults, whereas verification and validation deal with faults and failures. Consequently, there is no direct link between defect prevention and the verification and validation processes in terms of QA activities; the link is only indirect, through the specific preventive actions taken.

For instance, if the goal is to eliminate ambiguity in the requirements or in domain knowledge, the activity is indirectly related to validation; if the goal is to block the injection of faults and other defects through proper selection and application of processes, techniques, and methods, it is indirectly associated with verification.

The use of formal methods as quality assurance activities is closely associated with defect prevention and formal verification. The formal specification part is related to validation, though indirectly, and refers more to defect prevention. Formal verification belongs to the verification activities, as it verifies the correctness of a design or program against its formal specification.

Defect tolerance, safety assurance, and other defect containment activities are related more to validation than to verification because they concentrate on avoiding or minimizing global failures under actual operational environments.

When such defect containment features are specified for the software or an embedded system, checking conformance to this part of the specification can also be regarded, like other activities that test compliance with specifications, as verification.

The relationship between these two views is shown in Table 1: for each class of the defect-centered view, the related major quality assurance activities are listed according to whether they are connected with verification, validation, or both, directly or indirectly.

QA activities


Quality Engineering and Quality Improvement Paradigm


Quality improvement can be achieved with the help of measurement, analysis, feedback, and organizational support.

This framework is called the quality improvement paradigm.

Quality Engineering and Quality Improvement Paradigm

It comprises three interconnected phases:

Comprehending

This phase aims to understand the baseline so that improvement opportunities can be identified and clear, measurable goals can be set. All future process changes are measured against this baseline.

Estimation

The next phase introduces process changes through experiments and pilot projects, estimates their impact, and fine-tunes these changes.

Packaging

The final phase packages the baseline data, experiment results, local experience, and updated processes in order to infuse the findings of the improvement program into the development organization.

The quality engineering approach can be viewed as an adaptation of the quality improvement paradigm to ensure and evaluate quality and to manage the quality expectations of target customers.

Pre-QA activities roughly correspond to the Comprehending phase of the quality improvement paradigm.

The implementation of quality assurance strategies corresponds to the process changes of the Estimation phase of the quality improvement paradigm.

Analysis and feedback (or post-QA) activities overlap with both the Estimation and Packaging phases of the quality improvement paradigm, with the analysis part roughly corresponding to the Estimation phase and the longer-term feedback to the Packaging phase.

The Meaning of Functional Testing In Software Project. Part I


Functional testing examines the correct handling of a software product's external functions by monitoring the program's external behavior during execution.

The Meaning of Functional Testing In Software Project

The software is treated as a black box whose external behavior is monitored through its input and output. That is why functional testing is generally referred to as black-box testing, and the two terms can be used interchangeably.

The simplest form of black-box testing is to start running the software and observe it, in the hope that expected and unexpected behavior are easy to tell apart. This kind of software testing is also considered "ad hoc" testing.

Some unexpected behavior, such as a crash, is easy to identify. Once we establish through repeated execution that it is caused by the software, ruling out possible hardware troubles, we can pass the information to the responsible parties to have the defect corrected. In fact, this is the usual way in which defects found by customers are reported and corrected.

Another common form of black-box testing is the use of specification checklists, which list the external functions that are supposed to be present, together with some information about the expected behavior or input-output pairings.

The term input means any action or resource in the process of running a program.

The term output means any action or result produced by the running program.

Concrete examples of input to a calculator include the specific numbers entered and the operation requested, such as the division of two numbers. The output could be the actual division result or an error message, such as when trying to divide by zero. When defects are observed, specific follow-up actions are taken to correct them.
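The calculator example above can be sketched as a minimal black-box test driven by a specification checklist. Note that the `divide` function here is only a hypothetical stand-in for the system under test, and its error message is an invented expected behavior:

```python
def divide(a, b):
    """Hypothetical system under test: a calculator's division operation."""
    if b == 0:
        return "Error: division by zero"
    return a / b

# Specification checklist as input-output pairs: each entry maps an input
# (the numbers entered and the operation requested) to the expected
# external behavior, without any knowledge of the internals.
checklist = [
    ((10, 2), 5.0),                        # normal division
    ((7, 4), 1.75),                        # non-integer result
    ((1, 0), "Error: division by zero"),   # expected error behavior
]

for (a, b), expected in checklist:
    actual = divide(a, b)
    status = "PASS" if actual == expected else "FAIL"
    print(f"divide({a}, {b}) -> {actual!r} [{status}]")
```

Because only inputs and observed outputs appear in the checklist, the same harness works no matter how the calculator is implemented internally.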

What Is Usage-Based Statistical Testing?


Actual customer usage of software can be considered a form of usage-based testing. If defects are detected by customers, information about them can be reported to the software vendor, and integrated fixes can be constructed and delivered to all customers to prevent such failures.

What Is Usage-Based Statistical Testing

Nevertheless, fixing defects after release can be very costly because of the massive number of software installations. Frequent fixes can also harm the software vendor's reputation and long-term business viability.

If the actual or anticipated usage of a new product can be captured and used in software testing, product reliability can be assured most directly.

In usage-based statistical testing, the testing environment resembles the actual operational environment of the software in the field, and the overall testing sequence, represented by the ordered execution of specific test cases in a test suite, resembles the usage scenarios, sequences, and patterns of actual software usage by the target customers.

Because the huge number of customers and their diverse usage patterns cannot be captured in a finite set of test cases, statistical sampling is needed; that is why the term "statistical" appears in the name of this strategy. The term "random testing" is also used.
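As a sketch of such statistical sampling, test cases can be drawn at random according to an operational profile, i.e., the relative frequency with which target customers use each operation. The operations and probabilities below are invented purely for illustration:

```python
import random
from collections import Counter

# Hypothetical operational profile: each operation is mapped to the
# fraction of actual field usage it represents.
profile = {
    "search": 0.55,
    "view_item": 0.30,
    "checkout": 0.10,
    "edit_profile": 0.05,
}

def sample_test_sequence(profile, length, seed=0):
    """Draw a test sequence whose operation mix follows the usage profile."""
    rng = random.Random(seed)  # fixed seed keeps the suite reproducible
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=length)

suite = sample_test_sequence(profile, length=1000)
print(Counter(suite).most_common())  # frequently used operations dominate
```

Because heavily used operations are exercised most, the failures such a suite finds are the ones most likely to affect customers, which is what ties this kind of testing directly to reliability.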

Usage-based statistical testing is commonly applied in the final phase of software testing, normally referred to as acceptance testing, right before product release, so that stopping the testing is equivalent to releasing the product.

More recently, earlier sub-phases of testing, such as integration and system testing, have also been able to benefit from knowledge of actual customer usage to drive effective reliability improvement before product release. Naturally, the termination criterion used to stop such testing is the fulfillment of reliability goals.

Why QA Companies Use the Offshore-onsite Model


Globalization has reached many spheres of life, and IT is no exception. Every business tries to reduce costs and derive more benefit, and to reach these aims companies apply an offshore model.

The onsite-offshore model is the most popular way of working under agile development conditions, and software testing is the part of the IT world where the offshore model is most applicable.

Its main peculiarity is that the client and the test team may be located on opposite sides of the globe, or just across the street. So let us consider the benefits that both the software testing company and the product owner can gain.

Why is offshore-onsite model efficient?

  • It saves money. An offshore team's rate is usually lower than an in-house team's, because it is more cost-effective to engage several external specialists than to maintain an entire QA team.
  • It saves time. The time-zone difference works in your favor: a test team of skilled specialists from Ukraine can work efficiently while your developers are asleep.
  • It streamlines communication. Modern technologies such as Skype, various mailbox services, and instant messaging help you stay in touch with the test team at all times. This is important for getting familiar with the product's peculiarities, business strategy, and workflow.
  • High-quality interaction between the QA lead, the test team, and the client is the key to the best results.
  • It levels out risks. With an offshore test team engaged on your project, there is no need to worry about network problems, power interruptions, or even natural catastrophes.
  • It ensures quick scaling. The offshore model offers professionals of different kinds: whether you need manual testing, automated testing, usability checking, stress testing, unit testing, or anything else, external specialists ensure its successful execution.

What does testing on virtual machines hide?


Using virtual machines (VMs) during software testing saves time and money. A VM emulates a real personal computer with its programs and devices. The emulating program together with its OS is called the virtual machine, while the main OS and the physical computer are called the host system. Any application or operating system on a virtual machine works as if it were installed on a real computer.

The use of VMs is rather cost-effective. It enables specialists to perform different types of testing, e.g., cross-browser testing and multi-platform testing, using a single computer: the app is launched under various VMs and in different browsers. Testing on virtual machines also pursues other goals, such as the execution of regression testing and functional testing of client-server applications. Moreover, all these checks can be automated.

The virtualization approach brings the following advantages to software testing:

  • The team can check the software under an unlimited number of user configurations, which helps to detect potentially incompatible programs.
  • Virtual machines are very convenient: a severe error on a VM will not affect the physical computer.
  • The backup process is very easy to reproduce on virtual machines: the tester just copies the required folder or creates a snapshot.
  • Testers are able to clone virtual machines with their current states saved; this can be a linked clone or a full one.
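As a sketch, the snapshot and linked-clone operations described above could be scripted, assuming VirtualBox's `VBoxManage` command-line tool is available; the VM name is invented, and `dry_run` only prints the commands instead of executing them:

```python
import subprocess

VM = "test-vm"  # hypothetical VM name

def vbox(*args, dry_run=True):
    """Build a VBoxManage command and optionally execute it."""
    cmd = ["VBoxManage", *args]
    if dry_run:
        print(" ".join(cmd))  # show what would run
        return cmd
    subprocess.run(cmd, check=True)
    return cmd

# Save the VM's current state as a snapshot.
vbox("snapshot", VM, "take", "clean-state")

# Create a linked clone from that snapshot: it shares the base disk
# image with the original VM, so it is fast and space-efficient.
vbox("clonevm", VM, "--snapshot", "clean-state",
     "--options", "link", "--name", f"{VM}-clone", "--register")
```

With `dry_run=False`, each command would actually be run against the local VirtualBox installation.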

But despite all the above-mentioned benefits, VMs still have some disadvantages:

  • VM vendors do not support all platforms, so testers cannot emulate every device.
  • The test team may face equipment conflicts when the drivers of the virtualization system conflict with the test equipment.
  • The specialist cannot increase the VM's disk space if it contains snapshots.

The specialists either emulate several computers with different OS types and versions or create a virtual lab. But if you want to configure a virtual environment for several machines, you need a physical host.

Can cyber security culture ensure hacker’s wipeout?


Recent successful cyber attacks are forcing companies to change their attitude to cyber security culture, which is closely connected with the company's overall culture as well.

A poor corporate culture causes problems not only in management but also in data protection. Do your employees use personal devices for work? Such use of unmanaged devices leads to security breaches. Of course, nobody wants to limit employees' rights and freedoms, but this position may affect the company's data and reputation. To prevent such situations, companies should pay attention to the security consciousness of their specialists.

Show the threats

First of all, show your employees the potential threats: raise awareness of hackers' capabilities and of the most common attempts to break security and gain unauthorized access to corporate data. One training session on unsecured networks or phishing emails is not enough, though. Make sure training courses and seminars are held regularly, since hacking is dynamic.

Make cyber education regular

Do not disregard the quality of training programs: the courses should highlight current hacker techniques. Set up post-training checkpoints to evaluate the effectiveness of the courses and take measures if required. This helps to create a "human firewall" and reduce security vulnerabilities.

Implement business risk intelligence

Companies also implement business risk intelligence (BRI). This practice is designed to prevent cyber attacks and mitigate their consequences on the basis of information gathered on the Deep and Dark Web. This tactic works, but it is only the tip of the iceberg.

Beware of cloud

Even the latest security technologies will not ensure full protection if your employees neglect preventive measures against hacker attacks, for example by using weak passwords or unprotected cloud servers. In that case, the investment is wasted.

Remember, employees are not the last line of defense in terms of security; in some cases they are the number one cause of system breaches. Password sharing, unsafe networks, and suspicious emails all lead to data leaks.

A cyber security culture is crucial for business. So let us always stay secure!

Software failure: how to avoid Murphy’s law?


Have you ever downloaded an app from Google Play or the Apple App Store, used it for a couple of minutes, and then deleted it? Many people have come across this situation: an application loses 77% of its daily active users within three days after installation. The reasons why we dislike some applications vary, from design and interface up to security.
We wonder why so many companies release awful, poor-quality apps to the market, since such products face an inevitable end. Let's go through the main stages of software development and find out what the matter is.

Failure #1 – disregarding market

The development of any software product starts with an idea. You cannot create something on the spot without any market research. Of course, you can do that, but it will make no sense. So define who will use your app and why: you should have a clear understanding of which users you target, whether businessmen, lawyers, teachers, doctors, housewives, etc.

The analysis of the potential audience helps to specify common user scenarios and define the main features of your app. Don't forget that you are creating a product for people just like you. There are too many useless things in our lives already; try not to add one more. Remember, a useful app means a much-in-demand product that will boost your profit.

Failure #2 – neglecting your competitors

At the market analysis stage, monitor your competitors. If you want to develop, for example, a new grocery delivery app, investigate the products already available on the market. Look through users' comments and feedback in order to avoid the same mistakes.

Failure #3 – disregarding software basis

Now picture that you have analyzed the market and defined the app's functionality. The software development lifecycle also holds many pitfalls. The modern development process is divided into several releases in order to get to the market with a minimum viable product. In this case, the product depends on the OS and third-party APIs, which have issues of their own.
Android and iOS have different specific features and interface peculiarities that can cause a negative user experience. For instance, Android devices have a built-in 'Back' button and iOS devices don't. For multi-platform software, developers should take such platform-specific details into account.

Failure #4 – poor testing

Improper testing is among the most common reasons for software failures. Testing covers not only system functionality but also the other aspects that ensure a positive user experience.

Software testing procedure includes:

  • checking system security and access credentials,
  • controlling functionality under different network connections,
  • analyzing product usability,
  • examining the interface and design,
  • checking all content, including notifications and error messages, etc.

Testing should be comprehensive and ensure the fullest possible product coverage. Remember, there are no bugs if nobody is looking for them.
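The checklist above can be sketched as a small table-driven harness in which every item becomes an automated check. The `App` facade and its responses below are hypothetical stand-ins for a real system under test:

```python
class App:
    """Hypothetical facade over the system under test."""
    def login(self, user, password):
        return password == "s3cret"  # access-credential check
    def fetch(self, network):
        # simulated behavior under different network connections
        return {"wifi": "ok", "3g": "ok", "offline": "error"}[network]
    def error_message(self):
        return "Connection lost. Please try again."

app = App()

# Each checklist item becomes a named boolean check.
checks = [
    ("security: wrong password rejected",  not app.login("bob", "guess")),
    ("functionality: works over wifi",     app.fetch("wifi") == "ok"),
    ("functionality: works over 3g",       app.fetch("3g") == "ok"),
    ("content: error message is helpful",  "try again" in app.error_message()),
]

for name, passed in checks:
    print(f"{'PASS' if passed else 'FAIL'}: {name}")
```

Keeping the checks in one table makes the coverage of the checklist visible at a glance and easy to extend with new items.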
The difference between failure and success lies in market research, audience analysis, thorough testing, useful features, catchy content and design, and a hundred other things :)

Positive UX – how to make wow effect simple


Do you return home if you forget your phone on the table? Can you live at least one day without any gadget or PC? According to a 2011 report by iPass, more than 60% of mobile workers sleep with their phones. In 2015, Fortune magazine said that 71% of Americans either sleep with their smartphones or put them nearby. Moreover, a smartphone is the first thing that 35% of Americans think about when they get up in the morning.

Why do people fail to spend even an hour without smartphones or tablets? The phone itself is not the point; the applications are what really matter. An interesting idea, wide functionality, and a nice-looking design draw our attention and eat up our time. Some programs are developed for business and others for entertainment, but their popularity will no doubt continue to grow.

Nevertheless, in order to be in demand, a modern application should be of really high quality and bring something new. Most importantly, its design should be really great. The first interaction between a user and software starts with the UI (user interface); if the UI does not get a user interested, he or she will not buy or install the product.

Besides that, every successful piece of software ensures a positive user experience (UX). There is a huge number of mistakes that doom a product to failure, and now we are going to look at the most common ones.

No prejudice, no ego

Designers usually suffer from one common mistake: if you create something, you feel responsible for your creation. But a successful designer is able to set aside personal interests and preferences in order to ensure a memorable user experience. The product is developed to satisfy the needs of the whole target audience, not your own. Think big and do not create the software for yourself only.

Inconvenient wow effect

Have you ever heard that "ugly but useful trumps pretty but pointless"? While developing software, we should keep to the golden mean. Never neglect usability! Even a well-designed product will fail if it is inconvenient and unintuitive. To avoid usability issues, do some research to identify the problematic areas and specify users' preferences. Keep in mind that you create software for people: a clear and easy-to-use interface makes the best possible "wow effect".

Creativity vs prototypes

A wish to create something extraordinary and new is not such a good thing when we talk about software. Let's imagine an online store. What elements does it have? A product catalog with prices and descriptions, a shopping cart, an order form, etc. If you design a totally new concept of an online store, in the majority of cases end users will not understand what it is or how to use the site. Take past user experience into account and follow familiar prototypes; they help to simplify discoverability.

Performance – crucial for UX

This statement may cause disagreement, but positive UX and high performance are nevertheless interconnected. Constant error messages and low operating speed put users on edge, and even a perfect design does not change the situation. Surely, the aesthetic experience is important, but it is not the essential thing.

Based on the above-mentioned mistakes, several UX principles can be specified. According to Leo Frishberg, there are three key principles: beauty, firmness, and utility. This resembles the BTU model: Business/Commodity, Technology/Soundness, and User/Delight.

And remember, no UX is perfect, and it is quite difficult to keep things simple. But let's try.

Most expensive bugs of 2016


In 2016, almost 50% of the world's population faced software bugs of some kind, according to the software testing company Tricentis. Just imagine: nearly 4.4 billion people were affected by software issues. How is that possible? It is called the software "butterfly effect": a "small variation in the initial conditions of a dynamic system may produce large variation in the long-term behavior of the system".

According to Tricentis’ “Software Fail Watch: 2016 in Review”, there is a high chance that you were impacted by a software error last year or even last week. Failures are everywhere – they occur in various domains with different frequency. For example, the average number of software issues in the government sector is 15 per month; for retail and transportation – 9 errors per month. The finance and entertainment industries are the most reliable – only 2 failures per month.

In comparison with 2015, the total number of software bugs increased by nearly 12% in 2016. At the same time, project expenses grew, as did the number of affected users and companies.

Tricentis divided all detected failures into three main categories: 1) embedded – pre-installed software; 2) mobile / cloud – web-based software; 3) on-premise – software that requires installation and a specific environment. By frequency of occurrence, on-premise software took the number one position: such software can be found in almost every market domain.

Now we are going to have a look at some of the most expensive and well-known bugs of 2015–2016.

Nissan’s Airbag

Errors in the airbag sensory system made the company recall more than 1 million vehicles over 2 years. In some models, the airbag might deploy even when a door was slammed. In other cases, the system could not detect whether an adult or a child was sitting in the passenger seat. The software glitch caused high expenses.

Starbucks Breakdown

Starbucks suffered because of an ‘issue during a daily system refresh’. Being unable to process orders and take payments, the workers had to offer free drinks. The massive outage caused high losses.

Casino Software Issue

The Goodna Services Club deeply disappointed an elderly lady from Australia. The woman won a jackpot of $65,054. But the slot machine was designed in such a way that a user could win a maximum of $10,000, not $65,000. The Club said that it was a software error and refused to pay out the jackpot.

United Airlines Tickets

An issue in the automated ticketing and reservation system made United Airlines ground nearly 5 thousand flights. Being unable to monitor the boarding procedure, crews could not confirm that all passengers were on board. A number of flights were delayed.

Bugs occur in different market fields and cause many troubles, so skipping software testing is unacceptable. Try the services of independent testing as well. Let’s try to stop the bug invasion.

Shift-right or shift-left? What testing to choose?


Are you sure that shift-left testing is a new approach? Actually, as early as the 1950s IT specialists knew that testing performed at the very beginning of a project was more effective. At that time, developers wrote the code and conducted testing themselves – no independent testers were available. The waterfall model gave birth to separate test teams and divided project teams into dev and test ones.

The majority of software teams work according to the waterfall model. It contains a definite number of steps performed in a specific order to create a new software product or implement a new feature: system requirements, software requirements, analysis, system design, coding, software testing and the final step – operations. This model is considered a traditional one.

But the waterfall model makes testing a bottleneck of software development. The practice of testing a build / system / feature during the last few days before release does not work. The test team should retest the builds with fixed issues to make sure the errors have been resolved without any after-effects. So, testing at the last stages of the development procedure requires more time and may cause new issues.

According to the shift-left model, testing starts at the very beginning of the software process. Testers get acquainted with system requirements and take part in design sessions – they go hand in hand with developers. During all this time, QA specialists design possible test scenarios, define areas with potential bugs, etc. They have time to prepare for the testing procedure and to better understand the system behavior.

Now you may think that we will review the specifics of shifting testing left, because it is an effective technique that demonstrates visible results. But I have to disappoint you. We are going to talk about shifting testing right. Don’t be surprised – in a moment you will understand everything.

Shift-right? What is that?

To shift testing right does not mean to move it to the very end of the dev procedure. No. It means to continue testing under preproduction and production conditions. By shifting right, testers are able to detect issues that cannot be discovered during development.

Usually, shift-right includes the acceptance and deployment of software, A/B testing, non-functional and exploratory tests and, finally, advanced testing. In other words, the shift-right approach is software testing in the post-production environment.

By gathering feedback from the target audience, shift-right testing ensures an accurate evaluation of product functioning under real-world conditions. It helps to detect unexpected issues – crashes, low performance, errors, etc. – that appear only in production.
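The kind of production signals shift-right testing watches for can be sketched in a few lines. Everything below – the endpoints, thresholds and log records – is invented for illustration; in a real setup this data would come from monitoring or APM tooling:

```python
# Hypothetical production log records: (endpoint, status_code, latency_ms).
records = [
    ("/checkout", 200, 120), ("/checkout", 500, 340),
    ("/checkout", 200, 95),  ("/search", 200, 60),
    ("/search", 200, 75),    ("/search", 503, 800),
]

def production_health(records, max_error_rate=0.05, max_p95_ms=500):
    """Flag the issues that only show up under real-world load."""
    errors = sum(1 for _, status, _ in records if status >= 500)
    error_rate = errors / len(records)
    latencies = sorted(ms for _, _, ms in records)
    # p95 latency: value below which 95% of requests fall (simple index method)
    p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
    return {
        "error_rate": error_rate,
        "p95_ms": p95,
        "healthy": error_rate <= max_error_rate and p95 <= max_p95_ms,
    }

report = production_health(records)
print(report)  # here: unhealthy – 2 of 6 requests failed and p95 is 800 ms
```

A check like this would typically run continuously against live traffic, not once per release.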

The end of coding and checking the system on a single machine are not the final steps of the dev procedure. Success among end users depends on a number of factors. To reach better results, you should not choose between shifting left and shifting right. You should apply both approaches.

Shift-left is used in order to understand “what is going on”. It significantly shortens the time needed for test creation and increases the total quality of the software under development. Shift-left ensures the proper work of the system from the business side. Shift-right focuses on end users’ needs and ensures a positive user experience.


What tool to use for test automation in 2017?


According to the Gartner report Magic Quadrant for Software Test Automation, by 2020 Selenium WebDriver will have become a standard tool for automated functional tests. But in fact, Selenium is already considered a standard tool for web testing automation. So, vendors have to implement Selenium-like tools now, not in 3 years.

Also, the Gartner Magic Quadrant for Software Test Automation shows that there are giants of test automation on the market. Among the leaders are Hewlett Packard Enterprise (HPE), IBM, Tricentis, Worksoft, Oracle, TestPlant, SmartBear and others.
Based on the Gartner report, by 2020 50% of companies will use open-source tools for software testing because of the growing DevOps segment. The tendency of open-source transformation is clearly seen, and automation is constantly growing.

While selecting tools for test automation, it is useful, especially for beginners, to take the above-stated facts into account. The IT market is not stable, and the directions and tendencies of its development change quickly. But one thing remains the same – automation tools are useless if the efficiency they provide is lower than the expenses on autotest creation and maintenance.
Now we are going to review several of the most popular automation tools in 2017.

Selenium

The top of almost every ranking is Selenium – an open-source cross-platform framework for testing web applications. Test scripts can be written in Java, C#, Python, PHP, Ruby, Perl and JavaScript. The framework includes Selenium Grid, Selenium Remote Control and Selenium IDE. Thanks to its rather high capacities, Selenium can also be used for performance testing of web-based products.

Katalon Studio

Based on the Selenium and Appium frameworks, Katalon Studio is an open-source tool for testing mobile and web applications. Moreover, the tool supports API testing on different operating systems. It has a user-friendly IDE, supports object spy, an object repository and a browser plugin. Katalon Studio can be integrated with Git, Jira and Jenkins. Tests are generated automatically using built-in keywords.

IBM Rational Functional Tester

Rational Functional Tester (RFT), created by IBM, is a commercial tool for test automation and regression test execution. Besides, it can be used for GUI, functional and data-driven testing. The tool supports testing of .Net, Java, SAP, Siebel, Ajax, Dojo and other applications. It can be easily integrated with other software, including the IBM test management tool Rational Quality Manager.

TestComplete

TestComplete is a flexible tool for automated testing of web, mobile and desktop applications. It is also suitable for data-driven and keyword-driven testing. The tool supports custom plugins and extensions and has a record-and-play feature. During UI testing, issues can be documented using logs, captured images and video files.

The choice of a test automation tool depends on a number of factors. There is no universal framework or tool that satisfies all the requirements and needs of a QA team. Having analyzed the product specifics, the client’s requirements and the testing goals, you will be able to choose the proper test automation tool from a large pool of available frameworks.

API Economy: How to Build Secure Business on Platform


According to the IT research agency International Data Corporation (IDC), the global Internet of Things (IoT) market will reach $7.1 trillion by 2020. The development of the IoT area requires the corresponding growth of the Application Programming Interface (API) as a means of communication between different programs, platforms and applications – in fact, IoT integration is possible only thanks to APIs.

Being a set of system modules, an API enables the delivery of new products without developing them from scratch. An API provides all the necessary basis for integration with external providers of various services. Besides, it is important for GUI (Graphical User Interface) customization. Such large corporations as Amazon Web Services, Facebook, Google and Twitter have their APIs available to third-party providers.

IoT together with APIs has made new business channels possible. The newly created digital society leads to merging the real physical world and the virtual one. We already live in an API economy where companies build their business models and strategies based on API and IoT technology trends.

The API economy turns companies, organizations and businesses into platforms. A good example is Uber – a company that builds its business on a platform. Through APIs, the application connects drivers and passengers using Google Maps.

In order to turn a business into a platform, a company needs three main things: a digital business model, a business model platform and a business ecosystem. But the actual shift starts with changing the company’s culture and the internal organization of working processes.

Recommendations to turn business into platform

  • The core attribute of every business built on a platform is openness with the team, customers and users.
  • A bimodal approach – dividing management into Front Office and Back Office systems – ensures the proper launch of the business model platform and the creation of the ecosystem.
  • Turning a business into a platform requires new technology assets and, accordingly, an asset management system to catalogue, process and track various data and algorithms.
  • Business model platforms should be oriented to the organizational structure, not to the product. Business opportunities provided by APIs bring major value.
  • Risk management and system security should be essential aspects of every business, especially one turning into a platform.
  • Security is an essential attribute of APIs because there are a number of security holes.

Being fully focused on functionality and features, developers may accidentally open the door to corporate and customer data. APIs require specialists to think outside the box as hackers do. In order to protect APIs, vendors keep to the standard – the Internet Engineering Task Force’s OAuth. However, the standard is based on HTTP, which has its own flaws.

APIs are rather complex systems, and they support a big number of connections. Besides, new software is released as fast as possible, so it is quite difficult to write secure code. According to data provided by researchers at the University of Virginia, 67% of applications available in the App Store have security vulnerabilities – customers’ credentials can be stolen.

Add-ons available for APIs hide even more threats. For example, social networks and mobile solutions allow third-party service providers to add functionality to the basic system. In such a case, developers get access privileges and are able to manage system admin functionality. This opens new vulnerabilities.

To protect the system, developers follow a multi-pronged approach – the procedures of authentication and authorization are multistep and include biometric solutions, e.g. fingerprints. Also, during security testing, the main focus is usually on the front end, but the back end hides many security holes too.
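A minimal sketch of such a multistep check, assuming a password factor combined with a time-based one-time code in the spirit of RFC 6238 TOTP. The secrets and helper names below are invented for the example; real systems would use a proper identity provider:

```python
import hashlib
import hmac
import struct
import time

# Hard-coded only for the sketch – never store secrets like this in production.
PASSWORD_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()
OTP_SECRET = b"shared-secret-provisioned-to-device"

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Derive a 6-digit code from the shared secret and the current 30s window."""
    counter = struct.pack(">Q", int(at // step))
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(password: str, code: str, now: float) -> bool:
    """Both factors must pass; constant-time comparison avoids timing leaks."""
    password_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), PASSWORD_HASH)
    code_ok = hmac.compare_digest(code, totp(OTP_SECRET, now))
    return password_ok and code_ok

now = time.time()
good = authenticate("correct horse battery staple", totp(OTP_SECRET, now), now)
bad = authenticate("wrong password", totp(OTP_SECRET, now), now)
print(good, bad)  # True False
```

Security testing of such a flow would cover both factors separately and together, including replayed and expired codes.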

The development of new functionality requires spending 5–10% of the total project budget on security testing in order to avoid huge losses in the future.

How not to miss the last bug?


After a product release, users find a serious bug despite all the time and effort spent on coding and testing. How and why does this happen? Now we are going to find out the most common reasons why bugs are missed.

No doubt, a number of system issues are possible because of human nature. Nothing and nobody is perfect. The matter is that we all suffer from so-called tunnel thinking, or cognitive tunneling. In particular situations, usually stressful ones, we stay focused on one particular aspect or notion without seeing the whole picture. Because of this, obvious bugs can be missed.
Nevertheless, there are many other factors, not connected with human nature, that prevent testers from detecting bugs.

Pesticide Paradox

Boris Beizer suggested the idea of the Pesticide Paradox: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.” To avoid this paradox, testers should monitor product modifications and update their tests. You cannot reuse the same test scenarios to check different products or different versions of one product.
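Here is a toy illustration of the paradox, with an invented discount function: a frozen test set keeps passing even after a regression slips in, while a refreshed set that adds a boundary value catches it.

```python
def discount_v1(total):
    # Original behavior: 10% discount for orders of 100 or more.
    return total * 0.9 if total >= 100 else total

def discount_v2(total):
    # Regression in a later version: boundary accidentally moved to > 100.
    return total * 0.9 if total > 100 else total

frozen_inputs = [50, 150, 200]            # the original "pesticide"
refreshed_inputs = frozen_inputs + [100]  # boundary value added in review

def run_suite(func, inputs):
    """Return the inputs for which func deviates from the specified behavior."""
    expected = lambda t: t * 0.9 if t >= 100 else t
    return [t for t in inputs if func(t) != expected(t)]

print(run_suite(discount_v2, frozen_inputs))     # [] – old tests still pass
print(run_suite(discount_v2, refreshed_inputs))  # [100] – updated tests catch it
```

The frozen inputs never exercise the exact boundary, so the old suite stays green while users ordering exactly 100 lose their discount.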

Time limitations

An approaching deadline and a large scope of work are a tester’s nightmare. It is very difficult to stay attentive in such chaos. Rushed testing causes many troubles and leads to higher expenses – severe bugs are missed, and fixing them after the release will cost more. The product team, as well as the test one, has to plan time wisely and schedule tasks in order not to miss serious software errors.

Test strategy

Sometimes a testing team may apply the wrong checking strategy for different reasons, for example, lack of experience. The specialists should try to select the most bug-triggering scenarios and combinations of devices, OS versions, browsers, etc. They should imitate common user behavior and try to predict some unusual actions. Also, lack of documentation and requirements makes it more complicated to select a proper test strategy.

Unfixed bugs

Even if a bug is detected and reported, it does not necessarily mean that it will be fixed. Some tickets may simply be forgotten or lost in the backlog. Testers should monitor that all found errors are removed.

Missing a serious bug after the product release is very frustrating for a testing team. But we should not blame only QA and testers for missed bugs. A high-quality product with a minimum of bugs is the result of strong collaboration between all members of the project team.

How to Stop Annoying Project Team with Improper Test Cases?


Properly designed test cases promote the creation of top-quality products and solutions by increasing the productivity and work efficiency of the whole team. Poorly written test cases may lead to misunderstandings between team members and cause time wasting, while proper ones ensure flawless activities and a smooth release. How to write effective test cases? This article will give you the answer.

What is a test case?

In general, a test case is a set of specific actions that testers perform in order to detect system errors and the misoperation of software components. Several test cases form a test scenario.

For example, you should test a login procedure. This is a test scenario. The way you are going to verify the procedure is a set of test cases. First, you input a valid email – one case. Then you check the system behavior by entering invalid inputs – another case. And so on, until you cover the most common cases. But do not forget about untypical ones, because sometimes user behavior can be unpredictable.
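The scenario above can be written down as data-driven test cases. The login validator below is hypothetical – it exists only to show how one scenario splits into several cases, each with its inputs and expected result:

```python
import re

def login(email: str, password: str) -> str:
    """Invented login check used as the system under test for this sketch."""
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return "invalid email"
    if len(password) < 8:
        return "password too short"
    return "ok"

# Each tuple is one test case: inputs plus the expected result.
cases = [
    ("user@example.com", "s3cretpass", "ok"),             # valid inputs
    ("not-an-email",     "s3cretpass", "invalid email"),  # invalid email
    ("user@example.com", "short",      "password too short"),
    ("",                 "",           "invalid email"),  # empty (untypical)
]

for email, password, expected in cases:
    actual = login(email, password)
    assert actual == expected, f"{email!r}: got {actual!r}"
print("all login cases passed")
```

Adding a new case is a one-line change, which is exactly why data-driven cases scale better than copy-pasted test scripts.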

Why do we need to write test cases?

The quality of software products requires developers, testers and other specialists engaged in software creation to be very specific. By designing test cases, QA specialists verify product functionality and capacities step by step, in accordance with specifications and written requirements.

Test cases help to monitor and track the verified system aspects. Usually, they are created before the actual beginning of the testing procedure. The QA team designs drafts of test cases, to which they will later add the actual results of checking.

What information should every test case include?

To avoid the duplication of test cases, QA specialists should support every test case with information that helps to specify the testing conditions.

Every test case should start with a short description of the requirement under test. Then the tester explains how he will verify this particular system aspect. Do not forget to specify the testing environment: OS version, software build, security access, date and time, etc. Developers need these details to reproduce a bug and fix it.

Test cases include inputs (valid / invalid) and the corresponding outputs. Proper system behavior is well documented in different types of specifications and in the client’s requirements – this is the expected system behavior. If the system operates improperly, the tester should add some visual proof if possible.

All test cases consist of particular parts that include all the above-mentioned information.

What are standard attributes of test cases?

  • Unique Identification Number (ID)
  • Purpose – feature, component or capacity to be tested
  • Prerequisite – testing environment conditions
  • Test data – input values
  • Test steps – sequence of steps needed to reproduce test case
  • Expected results – proper system behavior according to the design
  • Actual results – actual system behavior after following the specified steps
  • Result – ‘pass’ or ‘fail’ indicator
  • Comments – additional information, e.g. screenshots, video, descriptions, etc.
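These attributes can be sketched as a simple structure. The field names follow the list above, while the sample values are invented; real teams would keep this in a test management tool rather than in code:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str                     # unique identification number
    purpose: str                     # feature or capacity to be tested
    prerequisite: str                # testing environment conditions
    test_data: dict                  # input values
    test_steps: list                 # sequence of steps to reproduce
    expected_result: str             # system behavior proper by design
    actual_result: str = ""          # filled in after execution
    result: str = "not run"          # 'pass' / 'fail'
    comments: list = field(default_factory=list)

    def record(self, actual: str):
        """Store the actual behavior and derive the pass/fail indicator."""
        self.actual_result = actual
        self.result = "pass" if actual == self.expected_result else "fail"

tc = TestCase(
    case_id="TC-001",
    purpose="Login with valid credentials",
    prerequisite="Test user exists; build 1.4.2 deployed",
    test_data={"email": "user@example.com", "password": "s3cretpass"},
    test_steps=["Open login page", "Enter credentials", "Press Sign in"],
    expected_result="Dashboard is displayed",
)
tc.record("Dashboard is displayed")
print(tc.case_id, tc.result)  # TC-001 pass
```

Deriving the result from expected versus actual, instead of setting it by hand, removes one common source of reporting mistakes.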

How to write effective test cases?

QA specialists design test cases to help other team members clearly understand and easily reproduce tests. So cases should be short, simple and include all necessary information. Pointless text steals the time of both testers and developers and causes new misinterpretations. Assertive language ensures the proper understanding of the specified steps. Be accurate and precise.

Try to be critical. While writing a test case, a user-oriented approach is essential for a quality check. After the test case is ready, review it from the perspective of another tester. Check whether a person not connected with the project will properly understand the case.

Thanks to a Traceability Matrix, testers do not miss verifying any software requirement. As the number of test cases needed to cover all system functionality may be endless, the QA team applies testing techniques: Boundary Value Analysis (BVA), Equivalence Partitioning (EP), State Transition and Error Guessing. They help to detect bugs in less time.
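As a quick sketch of Boundary Value Analysis, assume a hypothetical age field that accepts 18–65 inclusive. The technique shrinks an endless input space down to the edges and their neighbours:

```python
def boundary_values(lo: int, hi: int):
    """Return BVA test inputs for an inclusive [lo, hi] range."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# Hypothetical requirement: the age field accepts 18..65 inclusive.
values = boundary_values(18, 65)
print(values)  # [17, 18, 19, 64, 65, 66]

def accepts_age(age: int) -> bool:
    # System under test (invented for the sketch).
    return 18 <= age <= 65

results = {v: accepts_age(v) for v in values}
```

Six inputs instead of millions, yet they hit exactly the spots where off-by-one bugs live.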

Managing test cases in spreadsheets is not an effective practice. QA teams use special test case management tools that simplify test case creation, management and maintenance. Among the most popular tools are JIRA and Quality Center.

Do not Let the Cloud Turn to ‘Thunder’


Generating new traffic, providing new services to be sold and increasing profit, the cloud grows fast and requires new business models and the coordinated work of several players. In order to achieve success in the market, the quality of cloud solutions and services should be very high.

Being a platform for running applications over the network, the cloud involves the collaboration of three players that together ensure the proper operation of a solution. First of all, a cloud provider enables the cloud services. In order to ensure a smooth and flawless interaction between end users and the cloud, a communication service provider called a cloud carrier is required. Buying the services offered by the cloud provider, a cloud consumer gives users an opportunity to utilize the specially developed cloud solutions.

The level of user experience depends mainly on the cloud provider and carrier. They both define the key performance indicators (KPIs), including service-oriented and resource ones, that affect the quality of the ready cloud solution. For example, unstable service availability, unmanaged processor utilization and long delivery times are signals of poor service quality that cause a negative user experience.
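A service-oriented KPI such as availability can be checked in a few lines. The probe data and the 99.9% SLA target below are assumptions made for the sketch; a real carrier or provider would measure this from continuous health monitoring:

```python
# Results of periodic health probes against a cloud service:
# True = probe succeeded, False = service was unreachable.
probes = [True] * 997 + [False] * 3   # 1000 probes, 3 failed

def availability(samples):
    """Fraction of successful probes over the measurement window."""
    return sum(samples) / len(samples)

SLA_TARGET = 0.999  # assumed "three nines" service-level objective
measured = availability(probes)
print(f"availability={measured:.4f}, meets SLA: {measured >= SLA_TARGET}")
```

Here the measured 99.7% falls short of the three-nines target, which is precisely the kind of gap that turns into negative user experience.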

According to the National Institute of Standards and Technology (NIST), cloud computing should possess five main attributes – on-demand self-service, measured service, rapid elasticity, broad network access and resource pooling – to provide its key benefits. Cloud computing helps the consumer to reduce expenses, as he pays monthly and only for those resources that he uses. Besides, the development procedure does not require additional in-house teams or a separate infrastructure of one’s own.

In order to establish and maintain a strong and competitive business, cloud solutions should pass a complex assessment of their quality, including the verification of software operation as well as of the data centers and the cloud itself.

How to test?

The cloud testing life cycle includes the following steps:

  • design of test scenarios
  • development of test cases
  • choice of a cloud service provider
  • infrastructure setting
  • cloud servers leverage
  • test run
  • analysis of test results

During testing in the cloud, the QA team faces several challenges that can be solved by applying particular techniques and methods. As a number of companies / users can utilize the cloud data, testers verify its availability and accessibility without any delays. Checking provider capacities, e.g., the efficiency and assurance of services, is an inevitable part of cloud testing.

The proper work of a cloud solution requires the smooth operation of the server, network, database and software. Test cases should cover not only the functionality of every component but also their location. Besides, in order to detect problems, testers check how data is cached on the client.

By verifying the system’s integrability, the QA team ensures a flawless connection of the solution with third-party software. Interoperability issues may cause application crashes and breakdowns of both server and network.

In order to minimize issues and ensure high revenue, each cloud solution should pass complex testing. Besides, as cloud computing becomes more popular, more and more cloud solutions appear and market competition increases. So, top quality is among the main factors that guarantee market success.
