Department of Electrical and Computer Engineering
 

Software Metrics
(SENG 421)
Course Outline

Course Brochure
This course is also available as an industrial course in the Lecture Series on Software Systems for the Future. Please see the two-page course brochure (PDF format) and contact the series editor for reservations. You can also purchase the course CD-ROM online; it includes the most recent version of all slides, handouts, and other deliverables.

Instructor: Dr. Behrouz Far (Associate Professor, Faculty of Engineering)
   
Program/Year: Software Engineering Program / Junior (3rd Year)
   
Department: Electrical and Computer Engineering and Computer Science
   
School Year: 2008 (Fall)
   
Timetable: Lecture: Mon., Wed., Fri., (11:00-12:00)
Lab: Tue. 15:00-17:00
   
Location: Lectures: ICT 116
Lab: ICT 215
   
Background Courses: SENG 311: Principles of Software Engineering
CPSC 333 or ENEL 359 (recommended)
   
Lecture Format: Lecture (3x1 hour sessions per week) +
Lab (1x2 hours session per week)
   
Contact: e-mail: far@ucalgary.ca
Office: ICT 543 (5th floor, ICT Building)
Tel. (403) 210-5411
   
Teaching Assistants: Note: For discussions related to the lab assignments, the TAs and the course instructor can be reached in the lab room ICT 215 or at their desks during official lab hours only. Outside lab hours, please use email or arrange an appointment in advance.
   
Course Outline: Course Outline (PDF format)

Announcements:
  1. Sept/2/2008: The midterm exam is scheduled for: October 24, 2008 (Fri), 11:00 AM-12:00 NOON, ICT 116.
  2. Oct/23/2008: The final exam is scheduled for: December 12, 2008 (Fri), 12:00-15:00, Room TBA.

1. Course Description and Outline

This course is a step-by-step introduction to software metrics. It covers the foundations of measurement theory, models of software engineering measurement, software product metrics, software process metrics, and measurement management. The course is composed of the following basic modules:

  • Measurement theory (overview of software metrics, basics of measurement theory, goal-based framework for software measurement, empirical investigation in software engineering)
  • Software product and process measurements (measuring internal product attributes: size and structure, measuring external product attributes: quality, measuring cost and effort, measuring software reliability, software test metrics, object-oriented metrics)
  • Measurement management
A workshop (project) is designed to reinforce the presented material.

2. Course Web Site

The SENG 421 course home page contains links to up-to-date course information, problem assignments, announcements, and lab and examination scheduling. It is available through Dr. B.H. Far's home page at the URL:
(http://www.enel.ucalgary.ca/People/far/Lectures/SENG421/)


3. Allocation of Marks

Criteria                 Total mark   Other info
Midterm examination      20%          About one hour.
Quizzes + Lab Reports    40%          See the assignment schedule below.
Final examination        40%          About three hours.
To pass the course as a whole, it is necessary to submit all lab reports and earn a passing grade of at least 50% on the final exam.
Chapters of the recommended textbook covered in the final exam are as follows:
  1. Chapter 1, 2, 3, 4
  2. Chapter 7, 8, 9
  3. Chapter 10 (10.1 only)
  4. Chapter 11 (11.1 to 11.3 only)
  5. Chapter 12 (12.1 to 12.4 only)
  6. All handouts distributed in the class


4. Problem Assignments and Labs

Regular problem assignments will be given from the course textbook, but they are not required to be handed in for marking.

A list of projects will be posted on the course web page. Lab reports must be handed in for checking and marking. Some reports may require online submission and marking through the web interface pages designed for this course. The reports are reviewed, and a group discussion is held during lab hours.
Please drop the printed and electronic versions of your report in the designated dropbox in front of the lab room (SENG 421 dropbox, 2nd floor, ICT Building).

Assignment no.       Submission deadline                 Other info
1                    5:00 PM, Sept 23, 2008 (Tuesday)    Individual assignment: information collection and survey.
2 and 3 (combined)   5:00 PM, Oct 28, 2008 (Tuesday)     Group assignment; project: GQM analysis.
                                                         Project selection form for Assignments 2 and 3 in MS Word and PDF formats.
4                    5:00 PM, Nov 18, 2008 (Tuesday)     Group assignment; project: measuring function points.
                                                         Project selection form for Assignments 4 and 5 in MS Word and PDF formats.
5                    5:00 PM, Dec 2, 2008 (Tuesday)      Group assignment; project: measuring effort using COCOMO II.

Documents: Assignments outline (PDF format)
           Assignments (PDF format)


5. Details of the Assignments

Assignment no. 1

For the first part of the assignment, we suggest that you start by searching for keywords and topics related to software metrics in order to familiarize yourself with the terminology. Some keywords to start with are: function point, productivity, effort estimation, resource estimation, COTS evaluation, TMM (test maturity model), complexity metrics, object-oriented metrics, reverse engineering metrics, market/customer-oriented metrics, product quality metrics, reliability and testing metrics, process improvement metrics, performance metrics, metric suites and tools, metrics for agile processes, security metrics.

A good starting point article is: Stan Rifkin, What makes measuring software so hard? (also in PDF form) IEEE Software, May/June 2001, Vol. 18, No. 3, pp. 41-45.

The following archive is quite useful:

Other useful links and articles are given at the bottom of this page.

For the second part of the assignment, you must come up with a crisp recommendation on which the manager can base her/his decision. Your conclusion may even be that you cannot recommend one system because ....
Backing up the argument with models, graphs, and/or tables is considered better than reporting in a narrative style. Some useful hints are:

  • Define scales for the attributes to be compared, map the measured (given) values onto those scales, and discuss the properties of the scales.
  • Define a model for requirement assessment from both the manager's and the user's viewpoints.
  • Discuss issues with the goals, quality, and compatibility of the parameters.
  • Define a quality model based on the attributes and try to map the measured (given) values onto that model.
  • Use a Reliability Demonstration Chart to assess acceptance/rejection of the system.
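As a concrete illustration of the first hint, raw (given) attribute values can be mapped onto ordinal scales before two candidate systems are compared. The attribute names, thresholds, and vendor data below are invented for illustration only and are not part of the assignment data:

```python
# Hypothetical sketch: mapping raw measurements onto ordinal scales so
# that two candidate systems can be compared consistently.  Attribute
# names, thresholds, and vendor values are invented for illustration.

def to_ordinal(value, thresholds):
    """Map a raw value to an ordinal level 0 (low) .. len(thresholds) (high).
    `thresholds` must be ascending and oriented so that higher = better."""
    return sum(value >= t for t in thresholds)

# Assumed ordinal scale definitions ("poor < fair < good" thresholds).
scales = {
    "mttf_hours": [100, 500],   # mean time to failure
    "throughput": [50, 200],    # transactions per second
}

vendor_a = {"mttf_hours": 620, "throughput": 60}
vendor_b = {"mttf_hours": 340, "throughput": 250}

for name, raw in (("A", vendor_a), ("B", vendor_b)):
    levels = {attr: to_ordinal(raw[attr], scales[attr]) for attr in scales}
    print(name, levels)
```

On an ordinal scale only rank comparisons are meaningful; averaging the levels, for instance, would not be a meaningful statement, which is exactly the kind of scale property the hint asks you to discuss.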


Assignment no. 2 and 3 (combined)

Before starting this assignment, students should select their team members (3-4 members) and a topic (from the assignments document), and register it using the project selection form. Then they may want to start by reading the relevant sections of Goal-Driven Software Measurement -- A Guidebook (189 pages) and following the steps described there. You may also find the TA's slides (PowerPoint) useful. Some templates and particularly useful case studies are also available from the McGraw-Hill book The Goal/Question/Metric Method. Many of the concepts can only be comprehended and decided through discussion among the team members.

One difficulty you may face is that each of the GQ(I)M steps (especially steps 6-10) may require details that would only be available in a real situation. This gives students considerable freedom to assume the missing details and proceed. You may want to discuss some of your assumptions with the instructor or TAs in the review lab sessions.

You should start with a business goal (Step 1), which is the title of the project you have selected. Then:

  • Step 2 (identify what we want to know): ask 5-6 questions about the entities of interest (i.e., inputs and resources; internal artefacts; activities and flows; products and by-products).
  • Step 3: group the questions that address a common entity into a few (about 3-4) subgoals.
  • Step 4: to convert the subgoals to measurement goals, define entities and attributes for each question listed under each subgoal. This may be a long and repetitive task.
  • Step 5 (formalizing measurement goals): define the Purpose, Perspective, Environment, and Constraints for each object of interest. There may be 10-20 or more objects of interest, depending on the project.
  • Step 6 (identifying quantifiable questions and indicators): derive questions for all measurement goals, but proceeding to the indicators for only one of the measurement goals is sufficient.
  • Step 7: prepare a cross-reference checklist between the data elements to be collected and the indicators identified in Step 6.
  • Step 8: formalize the measures by defining scale, range, and precision for all the entities, and prepare the definition checklist for them.
  • Step 9: analysis, diagnosis, and actions. You may need to consider the actions only for new projects.
  • Step 10: fill in the measurement plan template; together with the output documents of each GQIM step, it will serve as your final report for this project.

Note that the average expansion rate from goal to subgoals and from subgoals to measurement goals is around 4-5, which means an initial business goal may be associated with 16-20 measurement goals. Expanding beyond the average may make the project hard to manage.
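The goal-to-measurement-goal expansion can be sketched as a simple data structure to keep the project manageable. All goal, question, and field texts below are invented placeholders, not assignment content:

```python
# A minimal sketch of the GQ(I)M expansion.  All goal, question, and
# field texts are invented placeholders, not assignment content.
from collections import namedtuple

business_goal = "Improve product delivery timeliness"   # Step 1 (placeholder)

# Steps 2-3: questions grouped by common entity into subgoals.
subgoals = {
    "Reduce schedule slippage": [
        "How often do milestones slip?",
        "How large is a typical slip?",
    ],
    "Improve estimation accuracy": [
        "How far off are the effort estimates?",
    ],
}

# Step 5: each measurement goal is formalized by four elements.
MeasurementGoal = namedtuple(
    "MeasurementGoal",
    ["purpose", "perspective", "environment", "constraints"])

example = MeasurementGoal(
    purpose="characterize milestone slippage",
    perspective="project manager",
    environment="in-house development projects",
    constraints="historical data from the last three releases only",
)

# With a fan-out of ~4 at each level (goal -> subgoals -> measurement
# goals), one business goal yields roughly 4 * 4 = 16 measurement goals.
FAN_OUT = 4
estimated_measurement_goals = FAN_OUT * FAN_OUT
print(estimated_measurement_goals)
```

Keeping the questions and goals in one structure like this makes it easy to see when the fan-out is growing past the manageable 16-20 range.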

There are a few sample project reports available for the students to review and find out what a project report may include:

  1. The Tardiness Factor: A Plan for Improving Project Timelines and Product Delivery
Note that the sample projects are for review only and may be incomplete. They may not be reproduced in a current assignment.

Assignment 2-3 self-assessment page (60K, MS Word format). Please use it when submitting the report in electronic form.


Assignment no. 4

In this assignment you are asked to measure the function point count for a typical software system. This will help you reinforce the concepts studied during the course.

The overall objective is to determine the adjusted function point count for the system. Several steps are necessary to accomplish this, but they need not be performed in this exact order:

  1. Determine type of function point count
  2. Determine the application boundary
  3. Identify and rate transactional function types to determine their contribution to the unadjusted function point count.
  4. Identify and rate data function types to determine their contribution to the unadjusted function point count.
  5. Determine the value adjustment factor (VAF)
  6. Calculate the adjusted function point count.

The unadjusted function point (UFP) count is determined in steps 3 and 4; it does not matter which of the two is completed first. In GUI and OO-type applications it is usually easier to begin with step 3.

The final (adjusted) function point count is a combination of the unadjusted function point count (UFP) and the general system characteristics (GSCs).
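Steps 3-6 above can be sketched numerically using the standard IFPUG weights and adjustment formula (AFP = UFP x (0.65 + 0.01 x sum of the 14 GSC ratings)); the function counts and GSC ratings below are invented for illustration:

```python
# Sketch of the adjusted function point calculation.  The IFPUG weights
# and the VAF formula are standard; the counts and GSC ratings below
# are invented example data.

# Weights for (low, average, high) complexity per function type.
WEIGHTS = {
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

def unadjusted_fp(counts):
    """counts: {type: (n_low, n_avg, n_high)} -> UFP (steps 3-4)."""
    return sum(n * w
               for ftype, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[ftype]))

def value_adjustment_factor(gsc_ratings):
    """14 general system characteristics, each rated 0-5 (step 5)."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    return 0.65 + 0.01 * sum(gsc_ratings)

# Invented example: a small system's function counts and GSC ratings.
counts = {"EI": (3, 2, 1), "EO": (2, 1, 0), "EQ": (1, 1, 0),
          "ILF": (1, 1, 0), "EIF": (0, 1, 0)}
ufp = unadjusted_fp(counts)                 # 67 for this example
vaf = value_adjustment_factor([3] * 14)     # TDI = 42 -> VAF = 1.07
print(ufp, vaf, round(ufp * vaf, 1))        # step 6: adjusted FP count
```

The point of the sketch is the structure of the calculation, not the numbers: the bulk of the assignment effort goes into identifying and rating the function types from your requirements.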

You may proceed as follows:

  1. Start by defining the requirements for your project. Note that the detailed requirements need only be developed to the extent sufficient for measuring the function points.
  2. Read the relevant sections of the Function Points Analysis Training Manual (110 pages) by David Longstreet (David@SoftwareMetrics.com). A copy of this document is also downloadable from the course web page. The document is a step-by-step guide to measuring FP; follow the steps described there. Many of the concepts can only be comprehended and decided through discussion among the team members.

Assignment no. 5

The goal of this assignment is to measure the effort for building a software system using the COCOMO II tool, which you can download from the COCOMO web page. COCOMO II includes a three-stage series of models:
  1. The Application Composition model is for the early phases, when usually you only have the requirements for the system yet to be built. It lets you calculate effort using object points by estimating the number of screens and reports from the specification.
  2. The Early Design model is for exploring architectural alternatives or incremental development strategies; it is used to evaluate alternative software/system architectures and concepts of operation. An unadjusted function point count (UFC) is typically used for sizing; with the choice of programming language, this value is converted automatically to KLOC. However, it is also possible to input the LOC directly or to account for the effects of adapting existing code. An important step here is to specify the inputs, outputs, and internal files needed to derive the function point count (UFC) for each program module. Another important task is to identify the values of the 7 effort adjustment factors (EAF) for each module.
  3. The Post-Architecture model is for more accurate cost and effort estimates and can be used during the actual development and maintenance of a product. It includes a set of 17 cost drivers and a set of 5 scale factors determining the project's scaling exponent. The cost drivers and scale factors can be calculated using the COCOMO II tool in the same way as in the Early Design model.
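The shape of the Post-Architecture estimate can be sketched with the published COCOMO II.2000 calibration constants (A = 2.94, B = 0.91); the project size, scale-factor ratings, and effort multipliers below are invented, and the tool itself should be used for the actual assignment:

```python
# A sketch of the COCOMO II Post-Architecture effort equation:
#   Effort [person-months] = A * Size^E * prod(EM),  E = B + 0.01 * sum(SF)
# A = 2.94 and B = 0.91 are the COCOMO II.2000 calibration constants;
# the scale-factor and effort-multiplier values below are invented.
from math import prod

A, B = 2.94, 0.91

def cocomo_effort(ksloc, scale_factors, effort_multipliers):
    """Effort in person-months; ksloc = size in thousands of SLOC."""
    exponent = B + 0.01 * sum(scale_factors)
    return A * ksloc ** exponent * prod(effort_multipliers)

# 5 scale factors (precedentedness ... team cohesion), invented ratings.
sf = [3.72, 3.04, 4.24, 3.29, 4.68]
# The 17 cost drivers collapse into one EAF; nominal here (product = 1.0).
em = [1.0]

effort = cocomo_effort(50, sf, em)   # assumed 50-KSLOC project
print(round(effort, 1))
```

Note how the scale factors enter the exponent: effort grows faster than linearly in size whenever their sum pushes E above 1, which is why rating them carefully matters as much as sizing.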
There is a sample project report available for the students to review and find out what a project report may include:
  1. Project Estimation: Online Bookshop Detailed report.
Note that the sample projects are for review only and may be incomplete. They may not be reproduced in a current assignment.


6. The Mid and Final Examinations

The midterm examination is a one-hour exam covering the material up to the date of the examination: approximately chapters 1-4 and 7-8 of the recommended textbook, plus any other handouts. The exact contents, time, and location will be announced in class and on the course web page well in advance.

The final examination is a three-hour exam scheduled by the Registrar's Office in the two-week period following the end of classes in December. It covers material from the entire course: chapters 1-4 and 7-12 of the recommended textbook, plus any other handouts. The exact contents will be announced in class and on the course web page.

Calculators are permitted in all examinations and quizzes, which are closed-book and closed-notes. Students do not need to memorize relationships or formulae; any that are required will be provided.


7. Detailed Contents

Regular Sessions
1st week
Notes
(PDF format)
Overview of software metrics (3 sessions)
  • Introducing the course.
  • What is software measurement?
  • What are software metrics?
2nd week
Notes
(PDF format)
The basics of measurement (3 sessions)
  • Metrology
  • Property-oriented measurement
  • Meaningfulness in measurement
  • Measurement quality
  • Measurement process
  • Scale
  • Measurement validation
  • Object-oriented measurement
  • Subject-domain-oriented measurement
3rd week
Notes
(PDF format)
Goal-based framework for software measurement (4 sessions)
  • Software measure classification
  • Goal-based paradigms: Goal-Question-Metrics (GQM) and Goal-Question-Indicator-Metrics (GQIM)
  • Applications of GQM and GQIM
  • Case studies
4th week
Notes
(PDF format)
Empirical investigation (2 sessions)
  • Software engineering investigation
  • Investigation principles
  • Investigation techniques
  • Formal experiments: Planning
  • Formal experiments: Principles
  • Formal experiments: Types
  • Formal experiments: Selection
  • Guidelines for empirical research
5th week
Notes
(PDF format)
Measuring internal product attributes: size (3 sessions)
  • Software size
  • Software Size: Length (code, specification, design)
  • Software Size: Reuse
  • Software Size: Functionality (function point, feature point, object point, use-case point)
  • Software Size: Complexity
6th week
Notes
(PDF format)
Measuring internal product attributes: structure (3 sessions)
  • Software structural measurement
  • Control-flow structure
  • Cyclomatic complexity
  • Data flow and data structure attributes
  • Architectural measurement
Review week
Review and midterm exam (2 sessions)
  • Midterm review session: (Wed, October 22, 2008)
  • Midterm examination (Fall 2008) (Fri, October 24, 2008)
7th week
Notes
(PDF format)
Measuring cost and effort (3 sessions)
  • Software cost model
  • COCOMO and COCOMO II
  • Constraint model
  • Software Lifecycle Management (SLIM)
  • Cost models: advantages and drawbacks
8th week
Notes
(PDF format)
Measuring external product attributes: quality (4 sessions)
  • Software quality
  • Software quality models: Boehm's model, McCall's model, ISO 9126 model, etc.
  • Basic software quality metrics
  • Quality management models
  • Measuring customer satisfaction
  • Software Quality Assurance (SQA)
9th week
Notes
(PDF format)
Measuring software reliability (3 sessions)
  • Reliability concepts and definitions
  • Software reliability models and metrics
  • Fundamentals of software reliability engineering (SRE)
  • Reliability management models
10th week
Notes
(PDF format)
Software test metrics (3 sessions)
  • Test concepts, definitions and techniques
  • Estimating the number of test cases
  • Allocating test times
  • Decisions based on testing
  • Test coverage measurement
  • Software testability measurement
  • Remaining defects measurement
11th week
Notes
(PDF format)
Object-oriented metrics (3 sessions)
  • Object-Oriented measurement concepts
  • Basic metrics for OO systems
  • OO analysis and design metrics
  • Metrics for productivity measurement
  • Metrics for OO software quality
  • Experience-based guidelines
Course review week
Review and final examination (2 sessions)
  • Final review session: (Dec 5th, 2008)
  • Final examination (Fall 2008): December 12, 2008 (Fri), 12:00-15:00


8. Textbooks and Suggested References

Textbook:
   
Additional Recommended Text and Reference Books:
  • Software Metrics: A Guide to Planning, Analysis, and Application, C. Ravindranath Pandian, Auerbach Publications, CRC Press Company, 2004.
 
  • Software Engineer's Reference Book, J. McDermid (Edt.), Butterworth Heinemann, 1993.
    ISBN 0-7506-0813-7.


9. Other Information

Related Links: Selected Papers and Books:

  1. Albrecht, A.J.,
    Measuring Application Development Productivity, Proc. IBM Application Development Joint SHARE/GUIDE Symposium, pp. 83-92, 1979.
  2. Albrecht, A.J. and Gaffney, J.F.,
    Software Function, Source Lines of Code and Development Effort Prediction: A Software Science Validation, IEEE Trans. Software Engineering, vol. 9, no.6, pp.639-648, (1983).
  3. The ami Handbook: A Quantitative Approach to Software Management
    London, England: The ami Consortium, South Bank Polytechnic, 1992.
  4. Armitage, J.W. and Kellner, M.I.,
    A Conceptual Schema for Process Definitions and Models, pp. 53-165, Proceedings of the 3rd International Conference on the Software Process, Oct. 10-11, 1994, IEEE Computer Society Press, 1994.
  5. Basili, V.R. and Weiss D.,
    A Methodology for Collecting Valid Software Engineering Data, IEEE Transactions on Software Engineering, vol. 10, pp.728-738, 1984.
  6. Basili, V.R. and Rombach, H.D.,
    The TAME Project: Towards Improvement-Oriented Software Environments, IEEE Transactions on Software Engineering, vol. 14, no. 6, 758-773, 1988.
  7. Briand, L., Morasca, S., Basili, V.,
    Property-Based Software Engineering Measurement, IEEE Transactions on Software Engineering, vol. 22, no. 1, 1996.
  8. Chidamber, S.R., Darcy, D.P., Kemerer, C.F.,
    Managerial Use of Metrics for Object-Oriented Software: An Exploratory Analysis, IEEE Transactions on Software Engineering, vol. 24, no. 8, August 1998.
  9. Chidamber, S.R., Kemerer, C.F.,
    A Metrics Suite for Object Oriented Design, IEEE Transactions on Software Engineering, vol. 20, pp. 476-493, 1994.
  10. Caws, P.,
    Definition and Measurement in Physics, Measurement: Definitions and Theories, C. West Churchman and Philburn Ratoosh, ed., pp. 3-17, John Wiley and Sons, Inc., 1959.
  11. Churchman, C.W.,
    Why Measure?, Measurement: Definitions and Theories, C. West Churchman and Philburn Ratoosh, ed., pp. 83-94, John Wiley and Sons, Inc., 1959.
  12. Fenton, N.E.,
    Software Metrics: A Rigorous Approach, Chapman and Hall, 1991.
  13. Fenton, N.E. and Whitty, R.,
    Introduction, pp. 1-19. Software Quality Assurance and Measurement, A Worldwide Perspective, Norman Fenton, Robin Whitty, and Yoshinori Iizuka, ed., pp. 1-19, International Thomson Computer Press, 1995.
  14. Ghiselli, E.E.; Campbell, J.P.; and Zedeck, S.,
    Measurement Theory for the Behavioral Sciences, W. H. Freeman and Company, 1981.
  15. Humphrey, W.S.,
    Managing the Software Process, Addison-Wesley, 1989.
  16. Jones, C.,
    Applied Software Measurement, McGraw-Hill, (1996).
  17. Kan, S.H.,
    Metrics and Models in Software Quality Engineering, Addison-Wesley, 1995.
  18. Kirchner, P.,
    Measurements and Management Decisions, Measurement: Definitions and Theories, C. West Churchman and Philburn Ratoosh, ed., pp.64-89, John Wiley and Sons, Inc., 1959.
  19. McCabe, T.J.,
    A Complexity Measure, IEEE Transactions on Software Engineering, vol.2, no.4, pp. 308-320, 1976.
  20. Rombach, H.D. and Ulery, B.T.,
    Improving Software Maintenance Through Measurement, Proceedings of the IEEE, vol. 77, no. 4, 581-595, 1989.
  21. Shepperd, M. and Ince, D.,
    Derivation and Validation of Software Metrics, Clarendon Press, 1993.
  22. Stevens, S.S.,
    On the Theory of Scales of Measurement, Science, vol. 103, no. 2684, 677-680, 1946.
  23. Stevens, S.S.,
    Mathematics, Measurement, and Psychophysics, Handbook of Experimental Psychology, S. S. Stevens, ed., pp. 1-49, John Wiley and Sons, Inc., 1951.
  24. Weinberg, G.M.,
    Quality Software Management, Vol. 2: First-Order Measurement, Dorset House Publishing, 1993.
  25. Wiener, N.,
    A New Theory of Measurement: A Study in the Logic of Mathematics, Proceedings of London Mathematical Society, vol. 2, no. 19, pp.181-205, 1920.
  26. Weyuker, E.J.,
    Evaluating Software Complexity Measures, IEEE Transactions on Software Engineering, vol. 14, no. 9, 1988.
  27. Zuse, H.,
    Software Complexity: Measures and Methods, Walter de Gruyter, 1991.
Journals:

  1. IEEE Software.
  2. IEEE Transactions on Software Engineering.
  3. IEE Proceedings - Software.
  4. Transactions on Software Engineering and Methodology (TOSEM), ACM.
  5. Information and Software Technology, Elsevier Science.
  6. Annals of Software Engineering, Kluwer.
  7. Automated Software Engineering, Kluwer.
  8. Empirical Software Engineering Journal, Kluwer.
  9. Software Practice and Experience, Wiley.
  10. Journal of Software Maintenance, Wiley.
  11. International Journal of Software Engineering and Knowledge Engineering, World Scientific.

All slides and notes can be viewed on-line using Netscape Navigator or MSIE (version 3.x or later) browsers. Copies of the slides in Portable Document Format (PDF) are available for on-line download. Please note that the downloadable course materials are provided solely for the internal use of students registered in this course. External and industrial participants may contact the author about availability of the materials.
Unix, PC, and Mac users can download, view, and print the PDF versions of the documents using Adobe's Acrobat Reader.


This page was created by Dr. B.H. Far. If you find omissions or glitches, or have suggestions for improving the material presented here, please contact me.
  Copyright Terms. THIS DOCUMENT AND ITS ENTIRE CONTENTS ARE COPYRIGHT 2001 BY DR. B.H. FAR. COPYING, REPUBLISHING AND DISTRIBUTING THIS DOCUMENT IN WHOLE OR IN PART IS PROHIBITED BY LAW. IF YOU DESIRE TO REPUBLISH PARTS OF THIS DOCUMENT IN ELECTRONIC FORM, PLEASE CONTACT THE AUTHOR.