Software Development Phase Details and Associated Tools
Requirement Phase
The purpose of requirements management is to ensure product development goals are successfully met. It is a set of techniques for documenting, analyzing, prioritizing, and agreeing on requirements so that engineering teams always have current and approved requirements. Requirements management provides a way to avoid errors by keeping track of changes in requirements and fostering communication with stakeholders from the start of a project throughout the engineering lifecycle.
The importance of requirements management
The Internet of Things (IoT) is changing not only the way products work, but their design and development. Products are continuously becoming more complex with more lines of code and additional software — some of which allow for even greater connectivity. With requirements management, you can overcome the complexity and interdependencies that exist in today’s engineering lifecycles to streamline product development and accelerate deployment.
Issues in requirements management are often cited as major causes of project failures.
Having inadequately defined requirements can result in scope creep, project delays, cost overruns, and a poor-quality product that does not meet customer needs or safety requirements.
Having a requirements management plan is critical to the success of a project because it enables engineering teams to control the scope and direct the product development lifecycle. Requirements management software provides the tools for you to execute that plan, helping to reduce costs, accelerate time to market and improve quality control.
Requirement management planning and process
Requirements management plan (RMP)
A requirements management plan (RMP) helps explain how you will receive, analyze, document and manage all of the requirements within a project. The plan usually covers everything from the initial gathering of high-level project information to the more detailed product requirements that may be gathered throughout the project lifecycle. Key items to define in a requirements management plan are the project overview, requirements gathering process, roles and responsibilities, tools, and traceability.
Requirements management process
A typical requirements management process complements the systems engineering V model through these steps:
Collect initial requirements from stakeholders
Analyze requirements
Define and record requirements
Prioritize requirements
Agree on and approve requirements
Trace requirements to work items
Query stakeholders after implementation on needed changes to requirements
Utilize test management to verify and validate system requirements
Assess impact of changes
Revise requirements
Document changes
By following these steps, engineering teams are able to manage the complexity inherent in developing smart, connected products. Using a requirements management solution helps to streamline the process so you can optimize your speed to market and expand your opportunities while improving quality.
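To make the traceability and change-impact steps above concrete, here is a minimal Python sketch of a traceability matrix; all requirement, work-item, and test-case identifiers are hypothetical, and real requirements management tools persist and version such links:

# A toy traceability matrix linking requirements to downstream artifacts.
traceability = {
    "REQ-001": {"work_items": ["TASK-11", "TASK-12"], "tests": ["TC-101"]},
    "REQ-002": {"work_items": ["TASK-13"], "tests": ["TC-102", "TC-103"]},
}

def impact_of_change(requirement_id):
    """Return every work item and test case linked to a requirement."""
    links = traceability[requirement_id]
    return links["work_items"] + links["tests"]

# If REQ-001 changes, these artifacts must be revisited and re-verified.
print(impact_of_change("REQ-001"))  # ['TASK-11', 'TASK-12', 'TC-101']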
Digital requirements management
Digital requirements management is a beneficial way to capture, trace, analyze and manage requirements changes. Digital management ensures changes are tracked in a secure, central location, and it allows for strengthened collaboration between team members. Increased transparency minimizes duplicate work and enhances agility while helping to ensure requirements adhere to standards and compliance.
Requirements attributes
To be considered “good,” a requirement should have certain characteristics, which include being:
Specific
Testable
Clear and concise
Accurate
Understandable
Feasible and realistic
Necessary
Sets of requirements should also be evaluated and should be consistent and nonredundant.
Benefits of requirements management
Some of the benefits of requirements management include:
Lower cost of development across the lifecycle
Fewer defects
Minimized risk for safety-critical products
Faster delivery
Reusability
Traceability
Requirements tied to test cases
Global configuration management
For more information please visit: https://www.ibm.com/topics/what-is-requirements-management
RM Tools
IBM® Engineering Requirements Management DOORS® Family is a requirements management application for optimizing requirements communication, collaboration and verification throughout your organization and supply chain. The application allows you to create relationships, trace dependencies, empower multiple teams to collaborate in near real-time and handle versioning and change management. IBM DOORS Family is a scalable solution that can help you meet business goals by managing project scope and cost.
Architecture & Design Phase
As per https://en.wikipedia.org/wiki/Software_design:
Software design is the process by which an agent creates a specification of a software artifact intended to accomplish goals, using a set of primitive components and subject to constraints.[1] The term is sometimes used broadly to refer to "all the activity involved in conceptualizing, framing, implementing, commissioning, and ultimately modifying" the software, or more specifically "the activity following requirements specification and before programming".
Software design is the process of envisioning and defining software solutions to one or more sets of problems. One of the main components of software design is the software requirements analysis (SRA). SRA is a part of the software development process that lists specifications used in software engineering.
If the software is "semi-automated" or user centered, software design may involve user experience design yielding a storyboard to help determine those specifications. If the software is completely automated (meaning no user or user interface), a software design may be as simple as a flow chart or text describing a planned sequence of events. There are also semi-standard methods like Unified Modeling Language and Fundamental modeling concepts. In either case, some documentation of the plan is usually the product of the design. Furthermore, a software design may be platform-independent or platform-specific, depending upon the availability of the technology used for the design.
The main difference between software analysis and design is that the output of a software analysis consists of smaller problems to solve. Additionally, the analysis should not differ greatly whether it is performed by different team members or groups. In contrast, the design focuses on capabilities, and thus multiple designs for the same problem can and will exist. Depending on the environment, the design often varies, whether it is created from reliable frameworks or implemented with suitable design patterns. Design examples include operating systems, webpages, mobile devices, or even the new cloud computing paradigm.
Tools: Rational Software Architect Designer
Built on the extensible Eclipse platform, IBM® Rational® Software Architect Designer provides a broad range of design and development tools that you can use to rapidly create, evaluate, and communicate software architectures and designs.
You can use Rational Software Architect Designer to perform the following tasks:
Design and analyze applications at higher levels of abstraction.
Specify and maintain key aspects of your service, framework, application, and deployment architectures.
Collaborate more effectively with your team members, communicate more effectively with your project stakeholders, and help to ensure that outcomes fulfill requirements.
Reduce implementation times by generating code and other runtime artifacts.
Foster re-use of common solution architectures to simplify application and data center complexity.
The Rational Software Architect family provides architecture and design tools that span the application lifecycle, from capturing initial ideas, defining solution architectures, planning your SOA, and designing lower-level application details, to planning and automating deployments.
You can design a software application using a variety of modeling and design languages supported by Rational Software Architect Designer such as: sketching, Business Process Model and Notation (BPMN), UML and domain specific UML extensions like SoaML and UPIA, and Deployment Planning.
To reduce implementation times and improve quality, you can transform your models into Java™ or C++ source code, runtime artifacts like WSDL files, and configuration files using the transformations provided with Rational Software Architect Designer, or your own customized transformations that target your unique architectures, frameworks, and coding standards.
Rational Software Architect Designer extension for SOA helps you design and deliver Java Enterprise Edition solutions using SOA and targeting IBM WebSphere Application Server and WebSphere Portal environments.
You can refine the details of the code using the underlying Eclipse integrated development environment (IDE) and specialized Java EE development tools that come with IBM Rational Software Architect Designer for WebSphere® Software. Combine conceptual modeling and concrete (code-level) modeling through various flexible process options to manage the relationship between your evolving designs and implementations.
As work on your project progresses, you can ensure that the solutions will be readily deployable by using the capabilities of the built-in technical and deployment architecture platform. With these built-in capabilities, you can specify, through all layers of the technology stack, the capabilities of the down-level layer and the requirements of the up-level layer. It steps you through the process of ensuring that the software and infrastructure correlate with their requirements and capabilities. You can correlate requirements and capabilities for multiple target deployment environments (integration testing, performance testing, staging, and production). By bridging the communication gap between the development team and the IT operations team, you protect yourself from costly and frustrating rework when problems are not discovered until deployment time.
For more information please visit: https://www.ibm.com/docs/en/rational-soft-arch/9.7.0?topic=designer-rational-software-architect-product-overview
Software Coding & Testing Phase
Software Coding Phase
A programming tool or software development tool is a computer program that software developers use to create, debug, maintain, or otherwise support other programs and applications. The term usually refers to relatively simple programs that can be combined to accomplish a task, much as one might use multiple hand tools to fix a physical object. The most basic tools are a source code editor and a compiler or interpreter, which are used ubiquitously and continuously. Other tools are used more or less depending on the language, development methodology, and individual engineer, and are often used for a discrete task, like a debugger or profiler. Tools may be discrete programs, executed separately – often from the command line – or may be parts of a single large program, called an integrated development environment (IDE). In many cases, particularly for simpler use, simple ad hoc techniques are used instead of a tool, such as print debugging instead of using a debugger, manual timing (of an overall program or a section of code) instead of a profiler, or tracking bugs in a text file or spreadsheet instead of a bug tracking system.
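To illustrate the ad hoc techniques mentioned above, the following Python sketch contrasts print debugging and manual timing with the standard-library profiler; the workload function is invented for the example:

import cProfile
import time

def summate(n):
    """A deliberately naive workload standing in for real application code."""
    total = 0
    for i in range(n):
        total += i
    return total

# Print debugging: inspect program state without attaching a debugger.
print("summate(10) =", summate(10))

# Manual timing of a section of code, instead of using a profiler.
start = time.perf_counter()
summate(1_000_000)
print(f"elapsed: {time.perf_counter() - start:.4f} s")

# The tool-based alternative: a profiler attributes time to each function.
cProfile.run("summate(1_000_000)")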
Software Testing Phase
As per https://users.ece.cmu.edu/~koopman/des_s99/sw_testing/:
Software Testing is the process of executing a program or system with the intent of finding errors. [Myers79] Or, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. [Hetzel88] Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible. [Rstcorp]
Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion, wear-and-tear -- generally it will not change until upgrades, or until obsolescence. So once the software is shipped, the design defects -- or bugs -- will be buried in and remain latent until activation.
Software bugs will almost always exist in any software module with moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable -- and humans have only limited ability to manage complexity. It is also true that for any complex systems, design defects can never be completely ruled out.
Regardless of the limitations, testing is an integral part of software development. It is broadly deployed in every phase of the software development cycle. Typically, more than 50 percent of the development time is spent in testing. Testing is usually performed for the following purposes:
To improve quality.
As computers and software are used in critical applications, the outcome of a bug can be severe. Bugs can cause huge losses. Bugs in critical systems have caused airplane crashes, allowed space shuttle missions to go awry, halted trading on the stock market, and worse. Bugs can kill. Bugs can cause disasters. The so-called year 2000 (Y2K) bug has given birth to a cottage industry of consultants and programming tools dedicated to making sure the modern world doesn't come to a screeching halt on the first day of the next century. [Bugs] In a computerized embedded world, the quality and reliability of software is a matter of life and death.
Quality means conformance to the specified design requirements. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by the programmer to find design defects. The imperfection of human nature makes it almost impossible to make a moderately complex program correct the first time. Finding the problems and getting them fixed [Kaner93] is the purpose of debugging in the programming phase.
For Verification & Validation (V&V)
As the topic of verification and validation indicates, another important purpose of testing is verification and validation (V&V). Testing can serve as a metric and is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain situations, or it does not. We can also compare the quality among different products under the same specification, based on results from the same test.
We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors -- functionality, engineering, and adaptability. These three sets of factors can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail.
There is a plethora of testing methods and testing techniques, serving multiple purposes in different life cycle phases. Classified by purpose, software testing can be divided into: correctness testing, performance testing, reliability testing and security testing. Classified by life-cycle phase, software testing can be classified into the following categories: requirements phase testing, design phase testing, program phase testing, evaluating test results, installation phase testing, acceptance testing and maintenance testing. By scope, software testing can be categorized as follows: unit testing, component testing, integration testing, and system testing.
Correctness testing
Correctness is the minimum requirement of software, and the essential purpose of testing. Correctness testing needs some type of oracle to tell the right behavior from the wrong one. The tester may or may not know the inside details of the software module under test, e.g. control flow, data flow, etc. Therefore, either a white-box or a black-box point of view can be taken in testing software. We must note that the black-box and white-box ideas are not limited to correctness testing.
Black-box testing
The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. [Perry90] It is also termed data-driven, input/output-driven [Myers79], or requirements-based [Hetzel88] testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing -- a testing method that emphasizes executing the functions and examining their input and output data. [Howden87] The tester treats the software under test as a black box -- only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs for corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate the correctness. All test cases are derived from the specification. No implementation details of the code are considered.
It is obvious that the more of the input space we have covered, the more problems we will find, and therefore the more confident we can be about the quality of the software. Ideally, we would be tempted to exhaustively test the input space. But as stated above, exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure whether the specification is correct or complete. Due to limitations of the language used in specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes, the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want -- they usually can tell whether a prototype is, or is not, what they want only after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software. [Beizer95]
Research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of the input space. Partitioning is one of the common techniques. If we have partitioned the input space and assume all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing [Beizer95] partitions the input domain into regions and considers the input values in each domain an equivalence class. Domains can be exhaustively tested and covered by selecting one or more representative values in each domain. Boundary values are of special interest: experience shows that test cases that explore boundary conditions have a higher payoff than test cases that do not. Boundary value analysis [Myers79] requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered.
Good partitioning requires knowledge of the software structure. A good testing plan will not only contain black-box testing, but also white-box approaches, and combinations of the two.
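As a minimal illustration of equivalence partitioning and boundary value analysis, the following Python sketch uses the standard unittest module; the shipping-fee function and its three-partition specification are invented purely for this example:

import unittest

def shipping_fee(weight_kg):
    """Hypothetical function under test. Specification: 0 < weight <= 5
    costs 5.00; 5 < weight <= 20 costs 9.00; anything else is rejected."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 5:
        return 5.00
    if weight_kg <= 20:
        return 9.00
    raise ValueError("parcel too heavy")

class BlackBoxTests(unittest.TestCase):
    def test_representative_values(self):
        # One representative value per equivalence partition.
        self.assertEqual(shipping_fee(2.5), 5.00)
        self.assertEqual(shipping_fee(12.0), 9.00)

    def test_boundary_values(self):
        # Boundary value analysis: probe the edges of each partition.
        self.assertEqual(shipping_fee(5.0), 5.00)    # edge of first partition
        self.assertEqual(shipping_fee(20.0), 9.00)   # edge of second partition
        with self.assertRaises(ValueError):
            shipping_fee(0)                          # invalid partition
        with self.assertRaises(ValueError):
            shipping_fee(20.5)                       # just past the last boundary

if __name__ == "__main__":
    unittest.main()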
White-box testing
In contrast to black-box testing, software is viewed as a white box, or glass box, in white-box testing, as the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as the programming language, logic, and style. Test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing [Myers79] or design-based testing [Hetzel88].
There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple condition coverage). [Parrington89]
Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software into a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code -- code that is of no use or never gets executed at all -- which cannot be discovered by functional testing.
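The following Python sketch shows the difference between statement and branch coverage on a trivial function; the function is invented, and in practice a coverage tool (for example coverage.py) measures this automatically:

def absolute(x):
    """Function under test; its branches are visible to the tester."""
    if x < 0:
        return -x    # executed only for negative inputs
    return x         # executed for all other inputs

# absolute(-3) alone executes the `if` and `return -x` but never reaches
# `return x`; adding absolute(4) completes statement and branch coverage.
assert absolute(-3) == 3   # exercises the true branch
assert absolute(4) == 4    # exercises the false branch
assert absolute(0) == 0    # boundary between the two branches
print("all branches exercised")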
In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants. The more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use.

The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above may not be safely classified into black-box testing or white-box testing. This is also true for transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques will need some knowledge of the specification of the software under test. Another reason is that the idea of specification itself is broad -- it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.
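Returning to mutation testing, here is a tiny hand-rolled Python sketch of the idea; real mutation tools generate mutants automatically, and the function and test cases below are invented:

def is_adult(age):
    return age >= 18

def is_adult_mutant(age):
    return age > 18   # the injected fault: >= weakened to >

def suite_passes(fn):
    """A weak test suite with no boundary case."""
    cases = [(17, False), (30, True)]
    return all(fn(age) == expected for age, expected in cases)

print(suite_passes(is_adult))         # True: the original passes
print(suite_passes(is_adult_mutant))  # True: the mutant survives,
                                      # revealing the suite is too weak
# Adding the boundary case (18, True) would kill this mutant.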
We may be reluctant to consider random testing a testing technique. The test case selection is simple and straightforward: test cases are randomly chosen. A study in [Duran84] indicates that random testing is more cost-effective for many programs. Some very subtle errors can be discovered at low cost. It is also not inferior in coverage to other carefully designed testing techniques. One can also obtain a reliability estimate using random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
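A minimal random-testing sketch in Python; the implementation under test is a placeholder, and the built-in sorted() serves as the oracle for this toy example:

import random

def sort_under_test(xs):
    """Stand-in for a hand-written sorting implementation being tested."""
    return sorted(xs)

random.seed(42)  # fixed seed so any failure is reproducible
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert sort_under_test(data) == sorted(data), f"failed on {data}"
print("1000 random test cases passed")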
Performance testing
Not all software systems have explicit performance specifications, but every system has implicit performance requirements: the software should not take infinite time or infinite resources to execute. "Performance bugs" is a term sometimes used to refer to design problems in software that cause the system performance to degrade.
Performance has always been a great concern and a driving force of computer evolution. Performance evaluation of a software system usually includes: resource usage, throughput, stimulus-response time and queue lengths detailing the average or maximum number of tasks waiting to be serviced by selected resources. Typical resources that need to be considered include network bandwidth requirements, CPU cycles, disk space, disk access operations, and memory usage [Smith90]. The goal of performance testing can be performance bottleneck identification, performance comparison and evaluation, etc. The typical method of doing performance testing is using a benchmark -- a program, workload or trace designed to be representative of the typical system usage. [Vokolos98]
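As a small illustration of the benchmark approach, this Python sketch times a representative workload with the standard-library timeit module; the workload itself is invented:

import timeit

# A micro-benchmark: a small, representative workload timed over many
# repetitions so results can be compared across versions or machines.
setup = "data = list(range(10_000))"
stmt = "sum(x * x for x in data)"

elapsed = timeit.timeit(stmt, setup=setup, number=1_000)
print(f"{elapsed:.3f} s total, {elapsed / 1_000 * 1e6:.1f} us per iteration")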
Reliability testing
Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult. Testing is an effective sampling method to measure software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can be further used to analyze the data to estimate the present reliability and predict future reliability. Therefore, based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use the software. Risk of using software can also be assessed based on reliability information. [Hamlet94] advocates that the primary goal of testing should be to measure the dependability of tested software.
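In the simplest case, the estimate is just the observed fraction of failure-free runs drawn from the operational profile; the figures in this Python sketch are made up for illustration:

# Hypothetical test campaign driven by the operational profile.
total_runs = 10_000
observed_failures = 7

# Simplest estimator: probability of failure-free operation per run.
reliability = 1 - observed_failures / total_runs
print(f"estimated reliability: {reliability:.4f} "
      f"({observed_failures} failures in {total_runs} runs)")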
There is agreement on the intuitive meaning of dependable software: it does not fail in unexpected or catastrophic ways. [Hamlet94] Robustness testing and stress testing are variants of reliability testing based on this simple criterion.
The robustness of a software component is the degree to which it can function correctly in the presence of exceptional inputs or stressful environmental conditions. [IEEE90] Robustness testing differs from correctness testing in the sense that the functional correctness of the software is not of concern; it only watches for robustness problems such as machine crashes, process hangs or abnormal termination. The oracle is relatively simple, so robustness testing can be made more portable and scalable than correctness testing. This research has drawn more and more interest recently, and most of it uses commercial operating systems as its target, such as the work in [Koopman97] [Kropp98] [Ghosh98] [Devale99] [Koopman99].
Stress testing, or load testing, is often used to test the whole system rather than the software alone. In such tests the software or system is exercised at or beyond the specified limits. Typical stresses include resource exhaustion, bursts of activity, and sustained high loads.
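A minimal robustness-testing sketch in Python: exceptional inputs are fed to a component and the only oracle is "no crash"; the component here is a trivial placeholder:

exceptional_inputs = [None, "", "\x00", -1, 2**63, [], {}, float("nan")]

def component_under_test(value):
    """Hypothetical component; any callable could be plugged in here."""
    return str(value).strip().lower()

failures = []
for value in exceptional_inputs:
    try:
        component_under_test(value)        # the return value is ignored
    except Exception as exc:
        failures.append((value, exc))      # abnormal termination = failure

print(f"{len(failures)} robustness failures out of "
      f"{len(exceptional_inputs)} exceptional inputs")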
Security testing
Software quality, reliability and security are tightly coupled. Flaws in software can be exploited by intruders to open security holes. With the development of the Internet, software security problems are becoming even more severe.
Many critical software applications and services have integrated security measures against malicious attacks. The purposes of security testing of these systems include identifying and removing software flaws that may potentially lead to security violations, and validating the effectiveness of security measures. Simulated security attacks can be performed to find vulnerabilities.
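As a toy example of a simulated attack, this Python sketch feeds well-known malicious input patterns to a hypothetical input validator and flags anything that slips through:

suspicious_inputs = [
    "' OR '1'='1",                  # SQL injection attempt
    "<script>alert(1)</script>",    # cross-site scripting attempt
    "../../etc/passwd",             # path traversal attempt
]

def is_safe_username(s):
    """Hypothetical validator: letters, digits, and underscores only."""
    return s.isascii() and s.replace("_", "").isalnum() and 0 < len(s) <= 32

for attack in suspicious_inputs:
    assert not is_safe_username(attack), f"validator accepted {attack!r}"
print("validator rejected all simulated attacks")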
Software testing can be very costly, and automation is a good way to cut down time and cost. Software testing tools and techniques usually suffer from a lack of generic applicability and scalability. The reason is straightforward: in order to automate the process, we have to have some way to generate oracles from the specification, and to generate test cases to test the target software against the oracles to decide their correctness. Today we still do not have a full-scale system that has achieved this goal. In general, a significant amount of human intervention is still needed in testing. The degree of automation remains at the automated test script level.
Development Support and Tracking Tools
SVN: Configuration Management Tool
About TortoiseSVN (https://tortoisesvn.net/about.html)
TortoiseSVN is a really easy to use Revision control / version control / source control software for Windows. It is based on Apache™ Subversion (SVN)®; TortoiseSVN provides a nice and easy user interface for Subversion.
It is developed under the GPL, which means it is completely free for anyone to use, including in a commercial environment, without any restriction. The source code is also freely available, so you can even develop your own version if you wish to.
Since it's not an integration for a specific IDE like Visual Studio, Eclipse or others, you can use it with whatever development tools you like, and with any type of file.
Easy to use
all commands are available directly from the Windows Explorer.
only commands that make sense for the selected file/folder are shown. You won't see any commands that you can't use in your situation.
See the status of your files directly in the Windows Explorer
descriptive dialogs, constantly improved due to user feedback
allows moving files by right-dragging them in the Windows Explorer
All Subversion protocols are supported
http://
https://
svn://
svn+ssh://
file:///
svn+XXX://
integrated spell checker for log messages
auto completion of paths and keywords of the modified files
text formatting with special chars
The big picture
Can create a graph of all revisions/commits. You can then easily see where you created a tag/branch or modified a file/folder
Graphs of commit statistics of the project
Per project settings
minimum log message length to avoid accidentally committing with an empty log message
language to use for the spell checker
TortoiseSVN provides a flexible mechanism to integrate any web based bug tracking system.
A separate input box to enter the issue number assigned to the commit, or coloring of the issue number directly in the log message itself
When showing all log messages, an extra column is added with the issue number. You can immediately see which issue the commit belongs to.
Issue numbers are converted into links which open the web browser directly on the corresponding issue
Optional warning if a commit isn't assigned to an issue number
Helpful Tools
Shows changes you made to your files
Helps resolving conflicts
Can apply patchfiles you got from users without commit access to your repository
TortoiseBlame: to show blame information for files; it also shows the log message for each line in a file.
TortoiseIDiff: to see the changes you made to your image files
SubWCRev: to include the revision numbers/dates/... into your source files
Available in many languages
TortoiseSVN is stable
Before every release, we create one or more "release candidates" for adventurous people to test first.
During development cycles, many people test intermediate builds. These are built automatically every night and made available to all our users. This helps find bugs very early so they won't even get into an official release.
A big user community helps out with testing each build before we release it.
A custom crash report tool is included in every TortoiseSVN release which helps us fix the bugs much faster, even if you can't remember exactly what you did to trigger the crash.
Perforce Helix Core: Configuration Management Tool
Perforce Helix Core (https://www.perforce.com/products/helix-core) is the leading version control system for teams who need to accelerate innovation at scale. Store and track changes to all your digital assets, from source code to binary to IPs. Connect your teams and empower them to move faster and build better.
Helix Core gives your team the foundation to accelerate innovation. It provides one secure place to store everything, enables global teams to collaborate better, and versions in the background so you can focus on your work, not your tools.
Save Time When Every Digital Asset is in One Place
Helix Core gives the entire team, from designers to devs, quick access to the latest version of the file they need — and it versions more than just source code. Get to market faster with higher quality products when your team isn’t overwriting each other’s work.
· Access a complete history of every digital asset, not just source code. Helix Core versions video, large binary files, IPs, and more. And it lets you visualize your assets' evolution over time.
· See when files are checked out, or automatically lock them, so you don’t waste time editing binary files that can’t be merged.
· Start working right away. There is no need for team members to download entire projects to their local drives to begin working.
Get a Foundation You Will Never Outgrow
Helix Core scales endlessly while continuing to perform at lightning speed. It won’t slow down as your teams and projects grow.
· Never outgrow your version control system. Helix Core was built to handle tens of thousands of developers and creatives, tens of millions of daily transactions, and petabytes of data.
· Develop at high velocity. Your Helix Core server can handle 10,000+ concurrent commits without slowing down.
· Eliminate WAN wait. Transfer large amounts of data and enormous assets quickly to teams across the globe.
· Start creating in the cloud. Helix Core has quick and pre-configured deployment options for Microsoft Azure and AWS.
Collaborate Securely with Anyone
Stop sharing valuable IP via unsecured channels. Keep it safe while still enabling efficient collaboration within teams and with external partners.
· Set permissions all the way down to a single file and IP address.
· Allow outside contributors to access only the files they need.
· Provide your users with single sign-on (SSO) when integrated with your organization's IdP.
· Review your full audit history to see what was accessed, what was changed, when, and by whom.
Version Without Reinventing Your Workflows
Keep your team focused on their deliverables. Helix Core versions your assets in the background and fits into your existing workflow and toolchain.
· Keep developing with Git, but with the power of Helix Core. Your developers are probably more familiar with Git, and they don’t need to stop using it. Helix Core and Git work well together.
· Easily integrate it with the tools your team already uses – like Unreal Engine, Jenkins, Photoshop, and Maya. Check out our vast inventory of free integrations.
· One of your tools not on this list? We offer APIs, so you can integrate and automate with any tools we don’t currently have an integration or plugin for.
Helix Core provides a single source of truth across teams. You can store code, large binary files, IP, and digital assets (including media files) in one central location. And it can handle both hardware and software assets.
IBM Rational ClearCase
IBM Rational ClearCase provides controlled access to software assets, including code, requirements, design documents, models, test plans and test results.
It features parallel development support, automated workspace management, baseline management, secure version management, reliable build auditing, and flexible access virtually anytime, anywhere.
· Scalable deployment for the enterprise: Support thousands of users at dozens of sites, managing terabytes of data.
· Flexible usage models for any development methodology: Mix and match four different views based on preferences and needs.
· Secure version management and IP protection: Capture and version assets securely in a centralized repository.
· Process control and traceability for compliance: Streamline the edit-build-debug cycle and accurately reproduce versions.
· Improved time to value: Help prevent mistakes, reduce bugs and identify errors earlier.
· Flexible pricing and deployment: Take advantage of a new consumption model based on FlexPoints.
JIRA: Issue Tracking and PM Tool
Jira Software for teams
Jira Software launched in 2002 as an issue tracking and project management tool for teams. Since then, 65,000+ companies globally have adopted Jira for its flexibility to support any type of project and extensibility to work with thousands of apps and integrations.
Jira Software helps teams across financial services, retail, software, high tech, automotive, non-profit, government, life sciences, and many more verticals stay organized and efficient. (https://www.atlassian.com/software/jira/guides/getting-started/who-uses-jira#for-agile-teams)
According to Atlassian, Jira is used for issue tracking and project management. Jira is offered in four packages:
· Jira Work Management is intended as generic project management.
· Jira Software includes the base software, including agile project management features (previously a separate product: Jira Agile).
· Jira Service Management is intended for use by IT operations or business service desks.
· Jira Align is intended for strategic product and portfolio management.
Jira is written in Java and uses the Pico inversion-of-control container, the Apache OFBiz entity engine, and the WebWork 1 technology stack. For remote procedure calls (RPCs), Jira has REST, SOAP, and XML-RPC interfaces. Jira integrates with source control programs such as ClearCase, Concurrent Versions System (CVS), Git, Mercurial, Perforce, Subversion, and Team Foundation Server. It ships with various translations, including English, French, German, Japanese, and Spanish. Jira implements the Networked Help Desk API for sharing customer support tickets with other issue tracking systems.
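For example, an issue can be created over the REST interface. The sketch below uses Python's requests library against Jira's documented /rest/api/2/issue endpoint; the base URL, credentials, and project key are placeholders, and the exact fields vary by Jira version and configuration:

import requests

# Placeholder values: substitute your own Jira base URL, credentials,
# and project key.
JIRA_BASE = "https://jira.example.com"
AUTH = ("username", "api-token")

payload = {
    "fields": {
        "project": {"key": "PROJ"},
        "summary": "Login page returns HTTP 500 on empty password",
        "issuetype": {"name": "Bug"},
    }
}

resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload, auth=AUTH)
resp.raise_for_status()
print("Created issue:", resp.json()["key"])  # e.g. PROJ-123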
JIRA is a project management software developed by the Australian company Atlassian. (https://www.simplilearn.com/tutorials/jira/what-is-jira-and-how-to-use-jira-testing-software) The word JIRA is derived from the Japanese word ‘Gojira’, meaning Godzilla. The software is based on agile methodology. If you’re wondering what Jira is used for, the answer is multiple purposes: bug tracking, issue tracking, and project management. Many businesses also use JIRA software in non-standard ways, such as for warehouse automation, document flow, expense optimization, and more. The JIRA dashboard contains several useful functions and features which enable easy handling of issues. One of the most sought-after agile project management solutions, JIRA has recently tweaked some of its products for all kinds of teams and organizations, including IT, marketing, operations, finance, HR, legal and other departments.
Key Jira concepts
Some of the key concepts in Jira are:
1. Projects: They are used to organize and manage work within Jira. Each project contains a set of issues and can have its own custom fields, workflows, and permission schemes.
2. Issues: They are the primary unit of work in Jira. They represent tasks, bugs, and other work items that need tracking and managing. Issues can be assigned to individuals or teams with various attributes like priority, status, and due date.
3. Workflows: They define the lifecycle of an issue, including its statuses and transitions. Jira has a default workflow, but it can also be customized to match the needs of a specific project (see the sketch after this list).
4. Boards: They are used to visualize and manage the progress of issues in a project. Jira has three types of boards: Scrum boards, Kanban boards, and Agile boards.
5. Sprints: Sprints are time-boxed periods of work in Scrum methodology. Sprints help teams to focus on a specific set of tasks and deliverables within a fixed timeframe.
6. Epics: Epics are large work items that are broken down into smaller issues. They provide a high-level view of the work that needs to be done and help teams to prioritize their work.
7. Versions: Versions are used to track and manage releases of a project. They represent a specific set of features or fixes that are ready to be shipped to users.
8. Dashboards: Dashboards provide a customizable view of project information, such as status, progress, and key metrics. Dashboards can be shared with team members or stakeholders to provide visibility into the project.
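To illustrate the workflow concept from item 3, here is a minimal Python sketch of an issue moving through a set of allowed status transitions; the statuses and transitions are illustrative, since real Jira workflows are configured per project:

# Allowed transitions between statuses (illustrative, not Jira's defaults).
WORKFLOW = {
    "To Do":       ["In Progress"],
    "In Progress": ["In Review", "To Do"],
    "In Review":   ["Done", "In Progress"],
    "Done":        [],
}

class Issue:
    def __init__(self, key, summary):
        self.key, self.summary, self.status = key, summary, "To Do"

    def transition(self, new_status):
        if new_status not in WORKFLOW[self.status]:
            raise ValueError(f"{self.status} -> {new_status} is not allowed")
        self.status = new_status

bug = Issue("PROJ-42", "Crash on startup")
bug.transition("In Progress")
bug.transition("In Review")
bug.transition("Done")
print(bug.key, bug.status)  # PROJ-42 Done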
Different Uses of Jira
Originally designed as a bug and issue tracker, Jira serves as a powerful work management tool for various use cases like:
· Requirement and Test case Management – to manage manual and automated tests
· Agile Teams – JIRA software provides scrum and Kanban boards for teams practicing agile methodologies.
· Project Management – JIRA software can be configured to fit any type of project, right from onset, through execution, to wrap-up.
· Software Development – for developing better software, faster, by incorporating Atlassian tools.
· DevOps – Atlassian Open DevOps helps teams ship better software, with an emphasis on best practices.
· Product Management – JIRA helps design detailed roadmaps, handle dependencies, and share plans and progress.
· Task Management – JIRA makes it easy to create tasks to work on, with details, due dates and reminders.
· Bug Tracking – the powerful JIRA workflow engine makes sure that bugs, once captured, are automatically assigned and prioritized.
Mantis: Bug Tracking Tool
MantisBT (https://www.mantisbt.org/) is an open source issue tracker that provides a delicate balance between simplicity and power. Users are able to get started in minutes and start managing their projects while collaborating with their teammates and clients effectively.
Mantis Bug Tracker is a free and open source, web-based bug tracking system. The most common use of MantisBT is to track software defects. However, MantisBT is often configured by users to serve as a more generic issue tracking system and project management tool. The name Mantis and the logo of the project refer to the insect family Mantidae, known for the tracking of and feeding on other insects, colloquially referred to as "bugs". The name of the project is typically abbreviated to either MantisBT or just Mantis.
Some salient features of MantisBT are:
· Email notifications: It sends out emails of updates, comments, resolutions to the concerned stakeholders.
· Access Control: You can control user access at a project level.
· Customize: You can easily customize Mantis as per your requirements.
· Mobile Support: Mantis supports iPhone, Android, and Windows Phone Platforms.
· Plugins: An ever-expanding library of plugins to add custom functionality to Mantis Issue Tracker.
Bugzilla: Bug Tracking Tool
What is Bugzilla?
Bugzilla (https://www.bugzilla.org/about/) is a robust, featureful and mature defect-tracking system, or bug-tracking system. Defect-tracking systems allow teams of developers to keep track of outstanding bugs, problems, issues, enhancements and other change requests in their products effectively. Simple defect-tracking capabilities are often built into integrated source code management environments such as GitHub or other web-based or locally-installed equivalents. We find organizations turning to Bugzilla when they outgrow the capabilities of those systems - for example, because they want workflow management, or bug visibility control (security), or custom fields.
Bugzilla is both free as in freedom and free as in price. Most commercial defect-tracking software vendors charge enormous licensing fees. Despite being free, Bugzilla has many features which are lacking in both its expensive and its free counterparts. Consequently, Bugzilla is used by hundreds or thousands of organizations across the globe.
Bugzilla is a web-based system but needs to be installed on your server for you to use it. However, installation is not complex.
Bugzilla is…
· Under active development
· Used in high-volume, high-complexity environments by Mozilla and others
· Supported by a dedicated team
· Packed with features that many expensive solutions lack
· Trusted by world leaders in technology
· Installable on many operating systems, including Windows, Mac and Linux
A Brief History of Bugzilla
When mozilla.org first came online in 1998, one of the first products that was released was Bugzilla, a bug system implemented using freely available open source tools. Bugzilla was originally written in TCL by Terry Weissman for use at mozilla.org to replace the in-house system then in use at Netscape. The initial installation of Bugzilla was deployed to the public on a mozilla.org server on April 6, 1998.
After a few months of testing and fixing on a public deployment, Bugzilla was finally released as open source via anonymous CVS and made available for others to use on August 26, 1998. At this point, Terry decided to port Bugzilla to Perl, with the hope that more people would be able to contribute to it, since Perl seemed to be a more popular language. The completion of the port to Perl was announced on September 15, 1998, and committed to CVS later that night.
After a few days of bake time, this was released as Bugzilla 2.0 on September 19, 1998. Since then, a large number of projects, both commercial and free, have adopted it as their primary method of tracking software defects. In April of 2000, Terry handed off control of the Bugzilla project to Tara Hernandez. Under Tara’s leadership, some of the regular contributors were coerced into taking more responsibility, and Bugzilla began to truly become a group effort. In July of 2001, facing lots of distraction from her “real job,” Tara handed off control to Dave Miller, who is still in charge as of this writing.
Design Principles
Bugzilla’s development should concentrate on being a bug system. While the potential exists in the code to turn Bugzilla into a technical support ticket system, task management tool, or project management tool, we should focus on the task of designing a system to track software defects. While development occurs, we should stick to the following design principles:
· Bugzilla must run on freely available, open source tools. Bugzilla support should be widened to support commercial databases, tools, and operating systems, but not at the expense of open source ones.
· Speed and efficiency should be maintained at all costs. One of Bugzilla’s major attractions is its lightweight implementation and speed. Minimize calls into the database whenever possible, don’t generate speed-sucking HTML, don’t fetch more data than you need, etc.
· ANSI SQL calls and data types must be used in all new queries and tables. Avoid database-specific calls and data types whenever possible. Existing SQL calls and data types should be converted to ANSI SQL.
· This should be obvious, but we should be browser agnostic in our HTML and form generation, which means cleaning up the HTML output of Bugzilla, and following all applicable standards.
Bugzilla's system requirements include:
· A compatible database management system
· A suitable release of Perl 5
· An assortment of Perl modules
· A compatible web server
· A suitable mail transfer agent, or any SMTP server
Currently supported database systems are MySQL, PostgreSQL, Oracle, and SQLite. Bugzilla is usually installed on Linux using the Apache HTTP Server, but any web server that supports CGI, such as Lighttpd, Hiawatha, or Cherokee, can be used. Bugzilla's installation process is command-line driven and runs through a series of stages in which system requirements and software capabilities are checked.
Workflow Management Tools at Wipro
An Integrated Approach to Software Process Improvement at Wipro Technologies: veloci-Q
Wipro’s quality improvement journey commenced with basic process definition using frameworks like those defined by the International Standards Organization (ISO). However, rapid growth in the scale and range of Wipro’s operations increased the need for mature quality processes. The focus was on process capability, people capability, defect reduction, and productivity improvement. Wipro’s relentless pursuit of process excellence bears testimony to the fact that Wipro was one of the earliest software services companies to be assessed at Maturity Level 5 of the Carnegie Mellon Software Engineering Institute (SEI) Capability Maturity Model (CMM), People Capability Maturity Model (P-CMM), and Capability Maturity Model Integration (CMMI) (V1.1) frameworks. Wipro was also one of the earliest software services companies to adopt Six Sigma. Wipro’s quality system has come a long way in terms of catering to the diverse challenges and growing demands of the international market. The evolution has been steady and significant—from volumes of printed manuals to a single, Web-based, integrated system available at each practitioner’s desktop. The focus on a total quality approach—through innovation in process, people, products, and services—coupled with technology enabled the evolution of veloci-Q: fast track to quality, Wipro’s holistic enterprise-wide quality system. veloci-Q embodies the best practices of each process improvement initiative and integrates multiple quality processes to deliver measurable business benefits.
Wipro has a holistic approach to quality management, with quality initiatives being driven through business-aligned measures. At Wipro, quality has always been viewed from the perspective of the customer, leading to a total quality approach that integrates people and process. Wipro’s total quality framework is optimized simultaneously along five interrelated dimensions: process, organization, culture, infrastructure, and metrics. During all of Wipro’s quality initiatives, all five dimensions have been optimized simultaneously, though incrementally. Processes are aligned with industry best practices and internationally renowned standards and frameworks like International Standards Organization (ISO) 9001, Capability Maturity Model (CMM), People Capability Maturity Model (P-CMM), and Six Sigma methodologies, amongst others. Wipro understands the spirit of such models, maps their relevance to the organization’s process improvement goals, and rigorously implements these goals to achieve its business needs.

At Wipro, quality is everyone’s responsibility, with ownership and accountability prevalent at every level in the organization. The total quality approach has systems in place to ensure the capture and dissemination of invaluable knowledge. Continuous focus and investment are made through the enhancement of tools and infrastructure to support process improvement initiatives. These process improvement initiatives are ably supported by an independent quality function, with the objective of defining, maintaining, and improving quality processes. Several supporting systems supplement the overall process, creating an organizational environment for integration.

Throughout Wipro’s quality journey, changing business needs and organizational goals brought about challenges that concerned merging multiple frameworks, retaining flexibility, educating and engaging personnel, keeping overheads low, and infusing vitality while maintaining continuity. Wipro was convinced that systematic and continuous process improvement was the answer to these challenges. Wipro commenced its quality journey by establishing ISO 9001-certified basic processes, moving on to laying a foundation for process improvement with CMM Maturity Level 5. Six Sigma concepts and methodologies were integrated to ensure continuous optimization in key process areas. Doing this also brought about a focused customer-centric and data-driven paradigm for product and process quality. P-CMM Maturity Level 5 established processes to address critical people-process issues successfully and improve the maturity of workforce practices. Achieving Capability Maturity Model Integration (CMMI) Maturity Level 5 enabled the development of a broad base of processes relating to systems engineering, software engineering, and integrated product development. Wipro introduced international benchmark standards on information security when it aligned with the British Standard 7799 (BS 7799), with a focus on the confidentiality, integrity, and availability of information. veloci-Q, Wipro’s enterprise-wide quality system, integrates multiple quality processes to deliver measurable benefits to both the business and customers. veloci-Q is continuously enhanced in a structured manner with a conscious integration of people and project execution processes.
For more information please visit: https://resources.sei.cmu.edu/asset_files/technicalreport/2004_005_001_14381.pdf
Note: It’s a very old report, but it may still be useful for companies wanting to implement an in-house CMMI / QMS tool.
Email ID: easycoachingtraining@gmail.com