What is it about?
The first detailed studies of electronic component reliability were undertaken to improve the performance of communications and navigational systems used by the American army. The techniques then developed were subsequently refined and applied to equipment used for many other applications where high reliability was of paramount importance - for example, in civil airline electronic systems. The evolution of good and reliable products is the responsibility of technical and professional people, engineers and designers. These individuals cannot succeed unless they are given adequate opportunity to apply their arts and mysteries so as to bring the end product to the necessary level of satisfaction. Few managements, however, are yet aware of the far greater potential value of the reliability of their products or services. Yet customer satisfaction depends, in most cases, far more on reliability of performance than on quality in the industrial sense. There was a time when reliable design could be prescribed simply as “picking good parts and using them right”. Nowadays the complexity of systems, particularly electronic systems, and the demand for ultrahigh reliability in many applications mean that sophisticated methods based on numerical analysis and probability techniques are brought to bear - particularly in the early stages of design - on determining the feasibility of systems. The growing complexity of systems, as well as the rapidly increasing costs incurred by loss of operation, have brought the reliability of components to the fore; components and materials can have a major impact on the quality and reliability of the equipment and systems in which they are used. The required performance parameters of components are defined by the intended application. Once these requirements are established, the necessary derating is determined by taking into account the quantitative relationship between failure rate and stress factors.
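The last step - relating failure rate to a stress factor in order to choose a derating - can be sketched numerically. The snippet below uses the Arrhenius temperature-acceleration model; the function names, the 0.7 eV activation energy and the FIT figures are illustrative assumptions, not values from the book:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(t_use_c: float, t_stress_c: float,
                           ea_ev: float = 0.7) -> float:
    """Acceleration factor between a stressed and a derated junction
    temperature (Arrhenius model; Ea = 0.7 eV is an assumed value)."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV)
                    * (1.0 / t_use_k - 1.0 / t_stress_k))

def derated_failure_rate(rate_at_ref_fit: float, t_use_c: float,
                         t_ref_c: float, ea_ev: float = 0.7) -> float:
    """Scale a failure rate (in FIT) quoted at a reference temperature
    down to a cooler, derated operating temperature."""
    return rate_at_ref_fit / arrhenius_acceleration(t_use_c, t_ref_c, ea_ev)

# Running a part at 55 degC instead of its 125 degC rating point
# reduces the predicted failure rate by well over an order of magnitude.
print(derated_failure_rate(100.0, 55.0, 125.0))
```

Reliability-prediction handbooks treat other stresses (voltage, humidity, vibration) through analogous multiplicative factors.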
Component selection should not be based only on data-sheet information, because not all parameters are always specified and the device may not conform to some of them. When a system fails, it is not always easy to trace the reason for its failure. However, once the reason is determined, it is frequently due either to a poor-quality part, to abuse of the system or of a part (or parts) within it, or to a combination of both. Of course, failure to operate according to expectations can occur because of a design fault, even though no particular part has failed. Design is intrinsic to the reliability of a system. One way to enhance reliability is to use parts having a history of high reliability. Conversely, classes of parts that are “failure-suspect” - usually due to some intrinsic weakness of design or materials - can be avoided. Even the best-designed components can be badly manufactured. A process can go awry, or - more likely - a step involving operator intervention can produce an occasional part that is substandard, or likely to fail under nominal stress. Hence screening and/or burn-in to weed out weak parts is a universally accepted quality-control tool for achieving high-reliability systems. Technology also plays an important role in the reliability of a given component, because each technology has its advantages and weaknesses with respect both to performance parameters and to reliability. Moreover, for integrated circuits, for example, the selection of the package form is particularly important [insertion-mounted or surface-mounted devices; plastic quad flatpack, fine pitch; hermetic devices (ceramic, cerdip, metal can); thermal resistance, moisture problems, passivation, stress during soldering, mechanical strength], as well as the number and type of pins.
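The effect of burn-in on a mixed lot can be illustrated with a toy model: a small "weak" subpopulation with a short mean life alongside a healthy main population, both assumed to have exponential lifetimes. All numbers here are hypothetical:

```python
import math

def weak_fraction_after_burn_in(p_weak: float, mttf_weak_h: float,
                                mttf_main_h: float, t_burn_h: float) -> float:
    """Fraction of burn-in survivors that still belong to the weak
    subpopulation, assuming exponential lifetimes for both groups."""
    surviving_weak = p_weak * math.exp(-t_burn_h / mttf_weak_h)
    surviving_main = (1.0 - p_weak) * math.exp(-t_burn_h / mttf_main_h)
    return surviving_weak / (surviving_weak + surviving_main)

# 2 % weak parts (MTTF 100 h) mixed into a healthy lot (MTTF 1e6 h):
# a one-week (168 h) burn-in screens out most of the weak parts.
print(weak_fraction_after_burn_in(0.02, 100.0, 1e6, 168.0))
```

The same arithmetic also shows the cost of over-long burn-in: once the weak parts are gone, further stress only consumes the life of the healthy population.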
Electronic component qualification tests are peremptorily required, and cover characterisation, environmental and special tests as well as reliability tests; they must be supported by intensive failure analysis to investigate the relevant failure mechanisms. The science of parts failure analysis has made much progress since the recognition of reliability and quality control as a distinctive discipline. However, a new challenge has arisen - that of computer failure analysis, with particular emphasis on software reliability. Clearly, a computer can fail because of a hardware failure, but it can also fail because of a programming defect, even though the components themselves are not defective. Testing both parts and systems is an important, but costly, part of producing reliable systems. Electrostatic discharge (ESD) induced failures in semiconductor devices are a major reliability concern; although improved process technology and device design have raised the overall reliability levels achieved by every device family, the failure mechanisms due to ESD - especially those associated with the Charged Device Model (CDM), the Machine Model (MM), etc. - are still not fully understood. Recent reliability studies of operating power semiconductor devices have demonstrated that the passage of a high-energy ionising particle from cosmic rays or another radiation source through the semiconductor structure may cause a definitive electric short-circuit between the device's main terminals. In electromigration failure studies, it is generally assumed that electromigration-induced failures may be adequately modelled by a log-normal distribution; but several research works have shown the inadequacy of this model and have indicated the possible applicability of the logarithmic distribution of extreme values.
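The distributional point at the end of this paragraph can be made concrete. The sketch below compares an early-failure quantile under a log-normal lifetime model and under a smallest-extreme-value (Gumbel-type) model on log-time, matched to the same median; the median life, sigma and scale values are assumed purely for illustration:

```python
import math
from statistics import NormalDist

def lognormal_time_quantile(median_h: float, sigma: float, p: float) -> float:
    """p-quantile (hours) of a log-normal lifetime with the given
    median and log-standard-deviation sigma."""
    return median_h * math.exp(sigma * NormalDist().inv_cdf(p))

def sev_time_quantile(median_h: float, beta: float, p: float) -> float:
    """p-quantile (hours) of a smallest-extreme-value (Gumbel-min)
    model on log-time, matched to the same median life."""
    mu = math.log(median_h) - beta * math.log(math.log(2.0))
    return math.exp(mu + beta * math.log(-math.log(1.0 - p)))

# Same median (1e5 h) and same spread parameter, but the extreme-value
# model puts the 0.1 % failure time much earlier than the log-normal -
# the tail that matters for a reliability budget.
print(lognormal_time_quantile(1e5, 0.5, 0.001))
print(sev_time_quantile(1e5, 0.5, 0.001))
```

This is why the choice of distribution, not just the fitted median, drives electromigration lifetime claims.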
The reliability problems of electronic devices, the parameters influencing their lifetime and the degradation processes leading to failure have rapidly gained importance. The natural enemies of electronic parts are heat, vibration and excess voltage.
Thus a logical tool in the reliability engineer’s kit is derating - designing a circuit, for example, so that semiconductors operate well below their permitted junction temperatures and maximum voltage ratings. Concerning the noise problem and reliability prediction of metal-insulator-metal (MIM) capacitors, the MIM system may generally be a source of partial discharges if inhomogeneities such as gas bubbles are present. If a ramp voltage is applied, current fluctuations are experimentally observable in many capacitors. In the time domain, these fluctuations appear as pulses of random amplitude, with random intervals between consecutive pulses. Electric charge is transferred through the system, reaching values as high as 1 pC - sufficient to cause irreversible changes in the polyethylene-terephthalate insulating layers. The occurrence of current pulses is therefore used as a reliability indicator. And the catalogue of reliability problems of active and passive electronic components, integrated or not, could be continued with various other problems and aspects. Classic examples of ultrahigh-reliability systems can be found both in military applications and in systems built for NASA; certain supersystems - of which only one or very few of a kind will be built - must rely more on parts quality control, derating and redundancy than on reliability prediction methods. Young people who are beginning their college studies will pursue their professional careers entirely in the 21st century. What skills must those engineers have? How should they be prepared to excel as engineers in the years to come? The present book - a practical guide to electronic systems manufacturing - tries to give an up-to-date response to these particular reliability aspects and problems of electronic components.
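The use of current pulses as a reliability indicator can be sketched as a simple pulse counter over a sampled current record; the threshold, sampling step and waveform below are hypothetical:

```python
def count_discharge_pulses(current_a, dt_s, threshold_a):
    """Count current pulses above a threshold in a sampled record and
    integrate the transferred charge (rectangular rule)."""
    pulses = 0
    charge_c = 0.0
    in_pulse = False
    for sample in current_a:
        if sample > threshold_a:
            charge_c += sample * dt_s
            if not in_pulse:       # rising edge starts a new pulse
                pulses += 1
                in_pulse = True
        else:
            in_pulse = False
    return pulses, charge_c

# Two pulses transferring a total charge on the order of 1 pC - the
# level the text cites as damaging for the insulating layer.
samples_a = [0.0, 5e-4, 5e-4, 0.0, 0.0, 5e-4, 0.0]
print(count_discharge_pulses(samples_a, dt_s=1e-9, threshold_a=1e-5))
```

A capacitor whose pulse count or accumulated charge exceeds a chosen limit during the voltage ramp would be rejected as a reliability risk.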
The authors
Why is it important?
The reliability problems of electronic devices, the parameters influencing their lifetime and the degradation processes leading to failure have rapidly gained importance. The natural enemies of electronic parts are heat, vibration and excess voltage. Thus a logical tool in the reliability engineer’s kit is derating - designing a circuit, for example, so that semiconductors operate well below their permitted junction temperatures and maximum voltage ratings.
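The junction-temperature side of that derating rule can be sketched as a quick design check; the 0.8 derating factor and all component values are assumed, illustrative numbers:

```python
def junction_temp_c(ambient_c: float, power_w: float,
                    rth_ja_c_per_w: float) -> float:
    """Steady-state junction temperature from ambient temperature,
    dissipated power and junction-to-ambient thermal resistance."""
    return ambient_c + power_w * rth_ja_c_per_w

def within_derating(ambient_c: float, power_w: float,
                    rth_ja_c_per_w: float, tj_max_c: float,
                    derating_factor: float = 0.8) -> bool:
    """True if the junction stays below a derated fraction of the
    absolute-maximum junction temperature (0.8 is an assumed rule)."""
    return junction_temp_c(ambient_c, power_w,
                           rth_ja_c_per_w) <= derating_factor * tj_max_c

# 1.5 W into Rth(j-a) = 40 degC/W at 50 degC ambient gives Tj = 110 degC,
# inside the 0.8 * 150 degC = 120 degC derated limit.
print(within_derating(50.0, 1.5, 40.0, 150.0))
```

An analogous check applies to voltage derating: compare the worst-case applied voltage against a chosen fraction of the rated maximum.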
Read the Original
This page is a summary of: Reliability of Electronic Components, January 1999, Springer Science + Business Media,
DOI: 10.1007/978-3-642-58505-0.