Quality Assurance for Beginners

While communicating with colleagues, clients, or within the testing team, we commonly use vocabulary like "unit testing", "functional testing", "regression testing", "system testing", "test policies", etc. If we use the same terms with a person who is not a testing professional, we need to explain each and every one of them in detail. In such cases communication becomes difficult and painful.

To speak the language of testing, you need to learn its vocabulary. This article was created to define the basic vocabulary used in Quality Assurance activities.

Authentication: Determining whether someone or something is, in fact, who or what it is declared to be. Authentication is a process in which the credentials provided by a person are compared to those on an authentication server. If the credentials match, the process is completed and the user is granted authorization for access.
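
As a rough illustration only, here is a minimal Java sketch of that credential check (the in-memory user store and the plain SHA-256 hashing are hypothetical simplifications; production systems use salted, adaptive hashes such as bcrypt):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;

public class Authenticator {

    // Hypothetical store of username -> stored password hash.
    private final Map<String, String> userStore;

    public Authenticator(Map<String, String> userStore) {
        this.userStore = userStore;
    }

    /** Returns true if the submitted credentials match the stored ones. */
    public boolean authenticate(String username, String password) throws Exception {
        String storedHash = userStore.get(username);
        if (storedHash == null) {
            return false; // unknown user
        }
        // Compare the hash of the submitted password with the stored hash.
        // Real systems use salted, adaptive hashing, not plain SHA-256.
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(password.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return storedHash.equals(hex.toString());
    }
}
```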

Authorization: The process of giving someone permission to do or have something. In multi-user computer systems, a system administrator defines which users are allowed access to the system and what privileges of use they receive (such as access to which file directories, hours of access, amount of allocated storage space, and so forth).

Component: A minimal software item for which a separate specification is available.

Defect (Error): Nonconformance to requirements, program specification, or common sense. Also, a human action that results in software containing a fault.

Release Candidate: A pre-release version that contains the desired functionality of the final version and still needs to pass regression tests; defects found at this stage should ideally be removed before the final version is released.

Use Case: The specification of application usage that is conducted from the end-user perspective.

Bug: A fault in code which causes the program to perform in an unintended or unanticipated manner. Also, an issue type in bug tracking systems.

Code Complete: The system is essentially done. The coding phase is more or less over, and full system testing can proceed in earnest. The developers consider themselves "done", with only unknown bugs left to repair.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional tests.
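
As a small illustration, suppose a test suite only ever calls the hypothetical method below with non-negative numbers. A coverage tool (such as JaCoCo for Java) would then flag the else branch as never executed, suggesting that an additional test is needed:

```java
public class SignClassifier {

    public static String classify(int n) {
        if (n >= 0) {
            return "non-negative"; // executed by existing tests -> covered
        } else {
            return "negative";     // never executed -> reported as uncovered
        }
    }
}
```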

Code Inspection: The most formal type of review; a kind of static testing used to avoid defect multiplication at a later stage. The main purpose of code inspection is to find defects, and it can also spot opportunities for process improvement.

Code Review: A systematic examination of application source code. It is intended to find mistakes overlooked in the initial development phase, improving the overall quality of software.

[Image: Code review example in the FishEye tool]

Dynamic Analysis: Testing and evaluation of a program by executing it in real time. The objective is to find errors in a program while it is running, rather than by repeatedly examining the code offline.

Acceptance Test Driven Development: A development methodology based on communication between business customers, developers, and testers.

Behaviour Driven Development: A synthesis and refinement of practices stemming from Test Driven Development and Acceptance Test Driven Development.

Test Driven Development: A testing methodology associated with Agile Programming in which tests are written prior to the code; they fail when first written and start passing once the code is ready.
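
A minimal JUnit 5 sketch of that red/green cycle (the StringUtils class here is hypothetical): the test is written first, fails until reverse() is implemented, and passes afterwards.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class StringUtilsTest {

    // Step 1: write the test first. It fails (red) because reverse()
    // does not exist yet.
    @Test
    void reversesAWord() {
        assertEquals("gnitset", StringUtils.reverse("testing"));
    }
}

// Step 2: write just enough production code to make the test pass (green).
class StringUtils {
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }
}
```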

Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies the specified business requirements; that is, to ensure that the product actually meets the user's needs and that the specifications were correct in the first place. Validation answers the question: Are we building the right product?

Verification: The process of evaluating the work products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase; that is, to ensure that the product is being built according to the requirements and design specifications. The techniques for verification are testing, inspection, and reviewing. Verification answers the question: Are we building the product right?

Test Case: A set of inputs, execution preconditions, and expected outcomes created for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. A Test Case will consist of information such as requirements, test steps, verification steps, prerequisites, outputs, test environment, etc.
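
For example, a test case for a login form might be recorded like this (all names and values are illustrative):

Test Case ID: TC-042
Requirement: A registered user can log in with valid credentials
Preconditions: User "alice" with password "secret1" exists in the test environment
Steps: 1. Open the login page. 2. Enter "alice" / "secret1". 3. Click "Log in".
Expected Result: The dashboard page opens and "alice" is shown in the header.
Test Environment: Chrome, staging server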

Test Environment: The hardware and software environment in which tests will be executed, and any other software with which the software under test interacts when under test, including stubs and test drivers.

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool. Put simply, it is an automated test case.
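
As an illustration, here is a minimal test script sketch using Selenium WebDriver (the URL and element ids are hypothetical, and a ChromeDriver binary is assumed to be available):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginScript {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Execute the steps of the test case automatically.
            driver.get("https://example.com/login");
            driver.findElement(By.id("username")).sendKeys("alice");
            driver.findElement(By.id("password")).sendKeys("secret1");
            driver.findElement(By.id("submit")).click();

            // Verify the expected outcome.
            boolean loggedIn = driver.getPageSource().contains("Dashboard");
            System.out.println(loggedIn ? "PASS" : "FAIL");
        } finally {
            driver.quit();
        }
    }
}
```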

Test Suite: A collection of tests used to validate the behavior of a product.

Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
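
A minimal example (the requirement and test case IDs are illustrative):

Requirement | Covering Test Cases
REQ-1 (login) | TC-001, TC-002
REQ-2 (password reset) | TC-003
REQ-3 (logout) | TC-004, TC-005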

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed basic acceptance criteria.

Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g. blind users).

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

Alpha Testing: Testing of an early working prototype of a software product.

Automated Testing: Test case execution using test tools. Can be applied to GUI, performance, API, and other kinds of testing.

Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.

Beta Testing: Testing of a pre-release version of a software product conducted by a specially selected group of customers.

Black-Box Testing: A test technique that focuses on testing the functionality of the program, component, or application against its specifications, without knowledge of how the system is constructed.

Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested.
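
For example, if a field accepts ages from 18 to 65 inclusive, boundary tests target 17, 18, 65 and 66. A JUnit 5 sketch (the validator is a hypothetical stand-in):

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class AgeValidatorTest {

    // Hypothetical validator: accepts ages in the inclusive range [18, 65].
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 65;
    }

    @Test
    void valuesJustInsideAndOutsideTheBoundaries() {
        assertFalse(isValidAge(17)); // just below the lower boundary
        assertTrue(isValidAge(18));  // lower boundary itself
        assertTrue(isValidAge(65));  // upper boundary itself
        assertFalse(isValidAge(66)); // just above the upper boundary
    }
}
```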

Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking (deadlocking) and use of single-threaded code.

Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique for Automated Testing.
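
A minimal JUnit 5 sketch: the test body stays the same while the data rows vary (inlined here with @CsvSource; @CsvFileSource can read the same rows from an external file):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AdditionTest {

    // One test method, many externally defined data rows.
    @ParameterizedTest
    @CsvSource({
            "1, 2, 3",
            "0, 0, 0",
            "-5, 5, 0"
    })
    void addsTwoNumbers(int a, int b, int expected) {
        assertEquals(expected, a + b);
    }
}
```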

Dependency Testing: Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution. Usually performed as a form of Load Testing.

End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use.

Functional Testing: Testing the features and operational behavior of a product to ensure they correspond to its specifications. This type of testing ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. The terms Functional Testing and Black Box Testing are usually used interchangeably.

Glass Box Testing: A synonym for White Box Testing.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal details.

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant for distributed systems.

Load Testing: See Performance Testing.

Life Cycle Testing: The process of verifying the consistency, completeness, and correctness of software at each stage of the development life cycle.

Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there, to ensure the system or application does not crash.

Negative Testing: Testing aimed at showing that software gracefully processes invalid input data. Also known as "test to fail".

E.g., you try to register a new user whose name already exists in the system.
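
That example as a JUnit 5 sketch (the UserService and DuplicateUserException below are hypothetical stand-ins):

```java
import static org.junit.jupiter.api.Assertions.assertThrows;
import java.util.HashSet;
import java.util.Set;
import org.junit.jupiter.api.Test;

class RegistrationNegativeTest {

    // Hypothetical minimal service that rejects duplicate usernames.
    static class DuplicateUserException extends RuntimeException {}

    static class UserService {
        private final Set<String> users = new HashSet<>();

        void register(String name) {
            if (!users.add(name)) {
                throw new DuplicateUserException();
            }
        }
    }

    @Test
    void rejectsDuplicateUsername() {
        UserService service = new UserService();
        service.register("alice");

        // Negative test ("test to fail"): the second registration with the
        // same name must fail gracefully, not crash the application.
        assertThrows(DuplicateUserException.class, () -> service.register("alice"));
    }
}
```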

Nonfunctional Testing: The set of Performance, Load, Usability, Supportability, Security, and Safety Testing.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often performed using an automated test tool, at close to maximum load.

Positive Testing: Testing aimed at showing software works properly. Also known as “test to pass”.

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions. Sometimes this type of testing is called "stress testing".

Regression Testing: Verification that new changes or features did not affect existing features.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.

Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing: Quick testing of the major functions of software.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how it fails. Often this is performance testing using a very high level of simulated load, with the goal of overloading the system.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

Usability Testing: Testing the ease with which users can use a product.

Unit Testing: Testing of individual software components' functions.

JUnit test script example
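
Below is a minimal sketch of such a test (the Calculator class is hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalculatorTest {

    // Hypothetical unit under test.
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    // A unit test exercises one small component's function in isolation.
    @Test
    void addsTwoIntegers() {
        Calculator calc = new Calculator();
        assertEquals(4, calc.add(2, 2));
    }
}
```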

Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

White Box Testing: Testing based on an analysis of internal workings and structure of software code. Also known as Glass Box Testing. Totally different from Black Box Testing.

Audit: An inspection/assessment activity that verifies compliance with plans, policies and procedures and ensures that resources are conserved.

Metric: A standard of measurement. A metric should be a real, objective measurement of something, such as the number of concurrent users during peak hours.

Quality Assurance: Planned and systematic actions necessary to provide confidence that a product fulfills customers' expectations.

Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality. Testing is a quality control activity.

Quality Management: That aspect of the overall management function that determines and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.

You’re welcome to comment or request additional terms.

Thank you!
