Program

Conference Schedule
IT-Confidence-2014_Timetable_v01

Keynotes

  • T. Furuyama, Analysis of the Factors that Affect Productivity of Enterprise Software Paper (EN) Presentation (EN) Presentation (JP)

This presentation reports the results of an analysis that clarifies the factors affecting the productivity of enterprise software projects, as follows. (1) Productivity is inversely proportional to the fifth root of the test case density and of the fault density, respectively. (2) Projects that require software with a high security or reliability level have low productivity, while projects whose objectives and priorities are very clear, projects where documentation tools are used, and projects where sufficient work space is provided have high productivity. (3) Projects managed by skillful project managers show low productivity, because such managers try to detect many faults. (4) If a project that requires software with a high security, reliability, or performance-and-efficiency level has poor working conditions, such as cramped work space or unclear role assignments and individual responsibilities, the project has remarkably low productivity.
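
Read literally, finding (1) corresponds to proportionalities of roughly the following form (a sketch of the stated relationship only; the symbols are chosen here for illustration and the paper's exact model may differ):

    % P: productivity, d_test: test case density, d_fault: fault density
    P \propto d_{\mathrm{test}}^{-1/5} \qquad\text{and}\qquad P \propto d_{\mathrm{fault}}^{-1/5}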

  • D. Galorath, Why Can’t People Estimate: Estimation Bias and Strategic Mis-Estimation Paper Presentation

Many people view an estimate as a quick guess that no one believes anyhow. But producing a viable estimate is core to project success, as well as to ROI determination and other decision making. Decades of studying the art and science of estimating have made it apparent that: most people don't like to and/or don't know how to estimate; those who do estimate are often wildly optimistic and full of unintentional bias; and strategic mis-estimation, when it occurs, produces misleading estimates. However, it is also obvious that viable estimates can make projects successful, make outsourcing more cost effective, and help businesses make the most informed decisions.
That is why metrics and models are essential to organizations: they provide the tempering "outside view" of reality recommended by Nobel Prize winner Daniel Kahneman in his work on estimation bias and strategic mis-estimation.


Featured Presentation
  • C. Green, Sizing for estimating, measurement and benchmarking Presentation

This presentation will discuss how sizing can be a normalising factor for estimating, measurement and benchmarking. It will introduce the need to use a size measure for both functional and non-functional size, utilising the IFPUG Function Point Analysis (FPA) method as well as the Software Non-functional Assessment Process (SNAP). The presentation will then follow the path from estimating, to measurement for projects, to benchmarking for organisations using industry data as the competitive comparison. It will touch on issues with requirements and how to utilise FPA and SNAP to address them, and on the accuracy levels of size assessment for estimating. A high-level view is given of data other than size that should be collected, but the focus is on sizing as a measure, not on a full measurement programme.

  • T. Fehlmann & E. Kranich, Measuring Tests using COSMIC Presentation

Information and Communication Technology (ICT) is not limited to software development, mobile apps and ICT service management, but percolates into all kinds of products with the so-called Internet of Things.
ICT depends on software, where defects are common. Developing software is knowledge acquisition, not civil engineering; thus knowledge might be missing, consequently leading to defects and failures to perform. In turn, operating ICT products involves connecting ICT services with human interaction, and is error-prone as well. There is much value in delivering software without defects. However, up to now there exists no agreed method of measuring defects in ICT. UML sequence diagrams are a software model that describes data movements between actors and objects and allows for automated measurement using ISO/IEC 19761 COSMIC. Can we also use them for defect measurement, so that standard Six Sigma techniques can be applied to ICT by measuring both functional size and defect density in the same model? This allows sizing of functionality and defects even if no code is available. ISO/IEC 19761 measurements are linear, thus fitting sprints in agile development as well as the use of statistical tools from Six Sigma.
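
As a rough illustration of the idea (not the authors' tooling), the sketch below counts COSMIC data movements, Entry, Exit, Read and Write, each worth 1 CFP under ISO/IEC 19761, from a simplified list of sequence-diagram messages, and relates recorded defects to that size. The message model, field names and example scenario are assumptions made here for illustration only.

    # Minimal sketch (illustrative only): COSMIC functional size from a
    # simplified sequence-diagram model, plus defect density per CFP.
    # The data model below is hypothetical; real tools parse UML/XMI.
    from dataclasses import dataclass

    # COSMIC (ISO/IEC 19761): every data movement counts as 1 CFP.
    DATA_MOVEMENT_TYPES = {"Entry", "Exit", "Read", "Write"}

    @dataclass
    class Message:
        source: str       # sending actor/object
        target: str       # receiving actor/object
        movement: str     # "Entry", "Exit", "Read" or "Write"

    def cosmic_size(messages):
        """Functional size in CFP: one point per data movement."""
        return sum(1 for m in messages if m.movement in DATA_MOVEMENT_TYPES)

    def defect_density(defects_found, messages):
        """Defects per CFP, measured on the same model as the size."""
        size = cosmic_size(messages)
        return defects_found / size if size else 0.0

    # Example: a small 'place order' scenario (hypothetical values).
    scenario = [
        Message("Customer", "OrderApp", "Entry"),   # order data enters
        Message("OrderApp", "OrderDB", "Read"),     # read product catalogue
        Message("OrderApp", "OrderDB", "Write"),    # persist the order
        Message("OrderApp", "Customer", "Exit"),    # confirmation leaves
    ]
    print(cosmic_size(scenario))           # 4 CFP
    print(defect_density(2, scenario))     # 0.5 defects per CFP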

  • M. Saeki, New topics of “IPA/SEC White Paper 2014-2015 on Software Development projects in Japan” Presentation

By analyzing historical data from the software industry, it is possible to improve software productivity and quality through benchmarking and through management decisions about software development practices. The Software Reliability Enhancement Center (SEC) of the Information-technology Promotion Agency, Japan (IPA) continuously collects new software development project data every year, in cooperation with more than twenty companies, and periodically publishes the “IPA/SEC White Paper on Software Development Projects in Japan”.
The White Papers report analyses of software development and maintenance projects in the recent Japanese IT industry, in order to quantitatively demonstrate its technological competence with respect to software productivity and quality. IPA/SEC will publish the “IPA/SEC White Paper 2014-2015 on Software Development Projects in Japan” and its addendum this autumn. Their quantitative analyses are backed by a data set of 3,541 projects.
They will contain more than 10 new analyses concerning software productivity and quality.
In this presentation, new analyses about the following topics will be shown:
(1) The relationship among function size, product size, and effort in each development phase.
(2) The productivity variation factors – Productivity (for example, development effort per function point) varies due to reliability requirement grades, number of pages of design documents per function point, and number of test cases per function point.
(3) The reliability variation factors – Reliability (for example, the number of identified defects in service per function point) varies due to reliability requirement grades and the maturity level of the development organization (for example, its quality assurance system). A small illustrative sketch of the two ratios in (2) and (3) follows this list.
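
The sketch below shows, with invented records and field names chosen here for illustration, how the two ratios named above (development effort per function point, and in-service defects per function point) could be computed and grouped by a variation factor such as the reliability requirement grade. It is not the White Paper's analysis method.

    # Minimal sketch (illustrative only) of the two ratios in (2) and (3),
    # grouped by a variation factor such as the reliability requirement grade.
    # The sample records below are invented for illustration.
    from statistics import median
    from collections import defaultdict

    projects = [
        # (reliability grade, function points, effort in person-hours, defects found in service)
        ("high",   500, 6000, 1),
        ("high",   300, 4200, 0),
        ("medium", 400, 3600, 2),
        ("medium", 250, 2000, 3),
    ]

    by_grade = defaultdict(list)
    for grade, fp, effort, defects in projects:
        by_grade[grade].append((effort / fp, defects / fp))  # (effort per FP, defects per FP)

    for grade, ratios in by_grade.items():
        eff_per_fp = median(r[0] for r in ratios)   # development effort per FP
        def_per_fp = median(r[1] for r in ratios)   # in-service defects per FP
        print(f"{grade}: {eff_per_fp:.1f} h/FP, {def_per_fp:.4f} defects/FP")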

  • S. Ohiwa, T. Oshino, S. Kusumoto & K. Matsumoto, Towards an Early Software Effort Estimation Based on the NESMA Method (Estimated FP) Presentation

The function point (FP) is a software size metric that is widely used in business application software development. Since FPs measure the functional requirements, the measured software size remains constant regardless of the programming language, design technology, or development skills involved. In addition, when planning development projects, FP measurement can be applied early in the development process. A number of FP methods have been proposed. The International Function Point Users Group (IFPUG) method and the COSMIC method have been widely used in software organizations.
FP is considered one of the most promising approaches to software size measurement, but it nevertheless has not spread throughout the Japanese software industry. One of the reasons hindering the introduction of FP into software organizations is that function point counting requires a lot of effort. According to the IPA/SEC White Paper on Software Development Projects in Japan 2010-2011, the penetration rate of FP in Japanese software development companies is only 43.8 percent. Also, the survey of information system user companies by JUAS disclosed that the penetration rate of FP in Japanese information system user companies is less than 30 percent.
NESMA provides some early function point counting methods. One of them is the estimated function point counting method (called the NESMA EFP). In the EFP, a counter first identifies all functions of all function types (ILF, EIF, EI, EO, EQ) in the target specifications. Then, the counter rates the complexity of every data function (ILF, EIF) as Low and every transactional function (EI, EO, EQ) as Average, and calculates the total unadjusted function point count. The counting effort is quite small compared with the IFPUG method, but there are not many articles that show the usefulness of the NESMA EFP based on actual software project data, especially for its application to software cost prediction.
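
For illustration, the sketch below applies that rule using the standard IFPUG complexity weights for the assumed ratings (ILF Low = 7, EIF Low = 5, EI Average = 4, EO Average = 5, EQ Average = 4); the function inventory in the example is hypothetical, not taken from the paper.

    # Minimal sketch (illustrative only) of the NESMA estimated FP (EFP) rule:
    # rate every data function (ILF, EIF) as Low and every transactional
    # function (EI, EO, EQ) as Average, then sum the unadjusted FP count.
    # Standard IFPUG complexity weights are used for those ratings.
    EFP_WEIGHTS = {
        "ILF": 7,   # data function, rated Low
        "EIF": 5,   # data function, rated Low
        "EI": 4,    # transactional function, rated Average
        "EO": 5,    # transactional function, rated Average
        "EQ": 4,    # transactional function, rated Average
    }

    def nesma_efp(functions):
        """Unadjusted estimated FP count from a list of (name, function_type)."""
        return sum(EFP_WEIGHTS[ftype] for _, ftype in functions)

    # Hypothetical function inventory identified from the specifications.
    inventory = [
        ("Customer", "ILF"),
        ("Exchange rates", "EIF"),
        ("Register customer", "EI"),
        ("Monthly invoice", "EO"),
        ("Customer inquiry", "EQ"),
    ]
    print(nesma_efp(inventory))  # 7 + 5 + 4 + 5 + 4 = 25 unadjusted FP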
This paper aims to evaluate the validity of using the NESMA EFP as an alternative to the IFPUG FP in the early estimation of software development effort. In the evaluation, we used the software development data of 36 projects extracted from a software repository that maintains 115 data items of 512 software development projects collected by the Economic Research Association from 2008 through 2012. Common characteristics of these 36 projects are as follows:
• Software was newly developed.
• Software development includes the following five software-specific low-level processes: architectural design, detailed design, construction, integration, and qualification testing.
• Actual FP and the total amount of effort are available.
• The actual functional size of each function type in all functions is available.
• The function types for each function have realistic functional sizes. For example, the average functional size of an ILF is from 7 to 15.
The main results of the empirical evaluation, and their contributions to software development, are as follows:
(1) There is an extremely high correlation between the IFPUG FP count and the NESMA EFP count
Figure 1 is a scatter plot showing the relationship between the IFPUG FP count and the NESMA EFP count in the 36 software development projects. The coefficient of determination between these two FP counts is 0.970. This result is not inconsistent with the previous empirical evaluation by NESMA reported in the document “Early Function Point Counting.” In the NESMA evaluation, the upper bound of the FP count was about 3,000, whereas in this evaluation it is about 30,000. This implies that the NESMA EFP can be used as an alternative to the IFPUG FP more widely in Japanese software development projects than before. Also, the NESMA EFP may be useful for individuals and companies that are considering whether to use the IFPUG FP in their software development projects, as a way to evaluate the feasibility of applying the IFPUG FP.
(2) There is a high correlation between the NESMA EFP count and the software development effort
Figure 2 is a scatter plot showing the relationship between the NESMA EFP count and the total amount of software development effort in the 36 software development projects. The coefficient of determination between the EFP count and the effort is 0.823. This implies that we may be able to use the NESMA EFP to predict software development effort in the early stages of a software development project. Early software effort estimation is one of the most important issues in software project management, so this result also encourages the many individuals and companies who are considering whether to use the IFPUG FP in their software development projects. The coefficient is high enough, but further discussion and data analysis are needed to eliminate or adjust some outliers and improve the accuracy of effort prediction by the NESMA EFP.
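
As a reminder of what such a coefficient of determination expresses, the sketch below fits a simple least-squares line to paired (EFP, effort) observations and computes R²; the sample values are invented here and are not the study's data.

    # Minimal sketch (illustrative only): least-squares fit and coefficient of
    # determination (R^2) for paired observations such as (NESMA EFP, effort).
    # The sample values below are invented and are not the paper's data.

    def r_squared(xs, ys):
        """R^2 of a simple linear regression of ys on xs."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        sxx = sum((x - mean_x) ** 2 for x in xs)
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        slope = sxy / sxx
        intercept = mean_y - slope * mean_x
        ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
        ss_tot = sum((y - mean_y) ** 2 for y in ys)
        return 1 - ss_res / ss_tot

    efp = [120, 250, 400, 800, 1500]          # estimated function points
    effort = [900, 2100, 3100, 6500, 11800]   # person-hours (hypothetical)
    print(f"R^2 = {r_squared(efp, effort):.3f}")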

  • R.D. Fernández, R. De La Fuente & D. Castelo, Software Rates vs Price of Function Points: A cost analysis Presentation

Implementing productivity models helps in understanding Software Development Economics, which up to now is not entirely clear. Most organizations believe that the only way to achieve improvements is to lower software rates. Drawing on three years of statistical data from large multinational clients, LEDAmc presented at the UKSMA 2012 Conference a study showing how the relationship between software rates and cost per function point differs from what could be expected, sometimes even far from it. The experience gained by LEDAmc through the implementation of software productivity models over the last two years brings new and updated insights to this study, which will be presented during the conference.
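
To make the distinction concrete, the small sketch below contrasts an hourly rate with the resulting price per function point, using invented supplier figures: a lower rate combined with lower productivity can still mean a more expensive function point. This is an illustration of the general point, not LEDAmc's model or data.

    # Minimal sketch (illustrative only): hourly rate vs. price per function point.
    # The supplier figures are invented to show that a lower rate does not
    # automatically mean a lower cost per delivered function point.

    def cost_per_fp(hourly_rate, hours_per_fp):
        """Price of one function point = rate * effort needed per FP."""
        return hourly_rate * hours_per_fp

    suppliers = {
        # name: (hourly rate in EUR, hours of effort per function point)
        "Supplier A": (60.0, 8.0),    # higher rate, higher productivity
        "Supplier B": (45.0, 12.5),   # lower rate, lower productivity
    }

    for name, (rate, hours) in suppliers.items():
        print(f"{name}: {rate:.0f} EUR/h -> {cost_per_fp(rate, hours):.0f} EUR per FP")
    # Supplier A: 60 EUR/h -> 480 EUR per FP
    # Supplier B: 45 EUR/h -> 562 EUR per FP (cheaper rate, dearer FP)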

  • J. Ogilvie, Beyond the Statistical Average: The KISIS Principle – Keeping it Simple is Stupid Presentation

Based on the speaker’s experience negotiating and managing many outsourcing contracts using Function Points as a Key Performance Indicator, this presentation describes the pitfalls that can be experienced if one takes too simplistic a view of the meaning and use of Function Point data, and suggests ways in which they may be avoided.
Starting with a typical outsourcing scenario, and using ISBSG project data, techniques to improve the effectiveness of a Function Point program are demonstrated.
Particular emphasis is placed on the importance of setting baselines appropriate to the environment to be measured and on deciding how to determine whether agreed performance targets are achieved.
The use of statistical analysis beyond just averages, to enable a more sophisticated and pragmatic interpretation of measurement data, is demonstrated. The view that a little statistical analysis can actually uncover “lies and damn lies” is offered.
Finally, a template for design of a successful Function Point Program is presented.

  • P. Forselius, New Look at Project Management Triangle Presentation

Almost every project management book introduces the project management triangle, and almost every certified project manager thinks that she or he understands the relationships between the elements of the triangle correctly: “The larger the scope, the more cost and time are needed.” However, especially in the ICT industry, the majority of projects overrun both budget and schedule, and deliver less functionality than expected. In this presentation we take another look at the project management triangle, to learn how to get more outcomes while spending less money and time.
