
4.1 This practice establishes the criteria to treat, or mark, or both, WPM with permanent identification for the phytosanitary treatment, or intended service cycle, or both, repair, specification used, and other designated characteristics.
4.2 The marking of the WPM shall be performed after ensuring the material complies with the applicable specification.
1.1 This practice covers the development of recommended treatment, or marking practices, or both, for wood packaging materials (WPM) and aids in identifying WPM as to phytosanitary treatment, intended service cycles, repair, the specific specification used to manufacture or recycle, and other user-designated characteristics.
1.2 This practice identifies WPM treated, or marked, or both, in accordance with industry, government, or internationally recognized standards.
1.3 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard.
1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.


1.1 This specification covers marketing, packaging, labeling, and warning requirements for adult magnet sets containing small, powerful magnets. It is aimed at minimizing the identified hazards to children and teens associated with ingesting small, powerful magnets that are intended for adults, that is, those persons 14 years of age and older.
1.2 The values stated in SI units are to be regarded as standard. The values given in parentheses after SI units are provided for information only and are not considered standard.
1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety, health, and environmental practices and determine the applicability of regulatory limitations prior to use.
1.4 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.


4.1 For criticality control of nuclear fuel in dry storage and transportation, the most commonly used neutron absorber materials are borated stainless steel alloys, borated aluminum alloys, and boron carbide aluminum alloy composites. The boron used in these neutron absorber materials may be natural or enriched in the nuclide 10B. The boron is usually incorporated either as an intermetallic phase (for example, AlB2, TiB2, CrB2, etc.) in an aluminum alloy or stainless steel, or as a stable chemical compound particulate such as boron carbide (B4C), typically in an aluminum MMC or cermet.
4.2 While other neutron absorbers continue to be investigated, 10B has been most widely used in these applications, and it is the only thermal neutron absorber addressed in this standard.
4.3 In service, many neutron absorber materials are inaccessible and not amenable to a surveillance program. These neutron absorber materials are often expected to perform over an extended period.
4.4 Qualification and acceptance procedures demonstrate that the neutron absorber material has the necessary characteristics to perform its design functions during the service lifetime.
4.5 The criticality control function of neutron absorber materials in dry cask storage systems and transportation packagings is only significant in the presence of a moderator, such as during loading of fuel under water, or water ingress resulting from hypothetical accident conditions.
4.6 The expected users of this standard include designers, neutron absorber material suppliers and purchasers, government agencies, consultants and utility owners. Typical use of the practice is to summarize practices which provide input for design specification, material qualification, and production acceptance. Adherence to this standard does not guarantee regulatory approval; a government regulatory authority may require different tests or additional tests, and may impose limits or restrictions on the use of a neutron absorber material.
1.1 This practice provides procedures for qualification and acceptance of neutron absorber materials used to provide criticality control by absorbing thermal neutrons in systems designed for nuclear fuel storage, transportation, or both.
1.2 This practice is limited to neutron absorber materials consisting of metal alloys, metal matrix composites (MMCs), and cermets, clad or unclad, containing the neutron absorber boron-10 (10B).
1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety, health, and environmental practices and determine the applicability of regulatory limitations prior to use.
1.4 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.


1.1 This terminology contains related definitions and descriptions of terms used, or likely to be used, in medical packaging standards that involve barrier materials. The purpose of this terminology is to promote a clear understanding and interpretation of the standards in which these terms are used.


5.1 Manufacturers of SPF insulation may need to test their products for vapor-phase emissions of volatile and semi-volatile organic compounds in order to comply with voluntary standards, purchase specifications, or other requirements.
5.2 Since SPF insulation is formed by chemical reaction when combining a two-component mixture during spraying, specialized equipment and procedures are needed to reproducibly create representative samples suitable for emission testing.
5.3 SPF insulation product manufacturer’s specifications and instructions must be followed carefully and detailed information regarding the spraying process must be recorded (see 7.3). Other precautions regarding handling and shipping are needed to ensure that the chemical integrity of the samples is preserved to the extent possible by practical means (see 7.5).
5.4 Laboratories must prepare representative test specimens from samples of SPF insulation in a consistent manner so that emission tests can be reproduced and reliable comparisons can be made between test data for different samples.
1.1 This practice describes standardized procedures for the preparation, spraying, packaging, and shipping of fresh spray polyurethane foam (SPF) insulation product samples to be tested for their emissions of volatile organic compounds (VOCs) and semi-volatile organic compounds (SVOCs). These procedures are applicable to both closed-cell and open-cell SPF insulation products. Potential chemical emissions of interest include blowing agents, solvents, aldehydes, amine catalysts, diisocyanates, and flame retardants.
1.2 Typically, SPF insulation samples are prepared at one location, such as a chemical manufacturing facility or a field product installation site. The newly prepared samples are preserved in a sealed bag, placed in a secondary container, and then shipped to a laboratory for testing.
1.3 The spraying of SPF insulation products is only to be performed by trained individuals using professional spraying equipment under controlled conditions. The details of the spraying equipment and spraying procedures are based on industry practice and are outside of the scope of this practice.
1.4 This practice also describes procedures for the laboratory preparation of test specimens from open-cell and closed-cell SPF insulation product samples. These specimens are prepared for testing in small-scale chambers following Guide D5116 and in micro-scale chambers that are described in Test Method D8142.
1.5 Procedures for VOC and SVOC emission testing, gas sample collection and chemical analysis are outside of the scope of this practice. Such procedures will need to address the potential for emissions of some SVOCs, for example, amine catalysts, flame retardants and isocyanates, to adhere to the chamber walls.
1.6 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard.
1.7 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety, health, and environmental practices and determine the applicability of regulatory limitations prior to use.
1.8 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.


5.1 Harmful biological or particulate contaminants may enter the package through imperfections such as pinholes or cracks in trays.
5.2 After initial instrument set-up and calibration, the operations of individual tests and test results do not need operator interpretation.
5.3 Leak test results that exceed the permissible threshold setting are indicated by audible or visual signal responses, or both, or by other means.
5.4 This non-destructive test method may be performed in either laboratory or production environments and may be undertaken on either a 100 % or a statistical sampling basis. This test method, in single instrument use and current implementation, may not be fast enough to work on a production packaging line, but is well suited for statistical testing as well as package developmental design work.
1.1 This non-destructive test method detects pinhole leaks in trays, as small as 50 μm (0.002 in.) in diameter, or equivalently sized cracks, subject to trace gas concentration in the tray, tray design and manufacturing tolerances.
1.2 The values stated in SI units are to be regarded as standard. The values given in parentheses after SI units are provided for information only and are not considered standard.
1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety, health, and environmental practices and determine the applicability of regulatory limitations prior to use.
1.4 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.


5.1 Harmful biological or particulate contaminants may enter the package through incomplete seals or imperfections such as pinholes or cracks in the trays.
5.2 After initial instrument set-up and calibration, the operations of individual tests and test results do not need operator interpretation. The non-destructive nature of the test may be important when testing high value added products.
5.3 Leak test results that exceed the permissible threshold setting are indicated by audible or visual signal responses, or both, or by other means.
5.4 This non-destructive test method may be performed in either laboratory or production environments. This testing may be undertaken on either a 100 % or a statistical sampling basis. This test method, in single instrument use and current implementation, may not be fast enough to work on a production packaging line, but is well suited for statistical testing as well as package developmental design work.
1.1 This non-destructive test method detects leaks in non-porous rigid thermoformed trays, as well as the seal between the porous lid and the tray. The test method detects channel leaks in packages as small as 100 μm (0.004 in.) diameter in the seal as well as 50 μm (0.002 in.) diameter pinholes, or equivalently sized cracks in the tray, subject to trace gas concentration in the package, package design and manufacturing tolerances.
NOTE 1: This test method does not claim to challenge the porous (breathable) lidding material. Any defects that may exist in the porous portion of the package will not be detected by this test method.
1.2 The values stated in SI units are to be regarded as standard. The values given in parentheses after SI units are provided for information only and are not considered standard.
1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety, health, and environmental practices and determine the applicability of regulatory limitations prior to use.
1.4 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.


5.1 Introduction of robots to the responder's cache for use in urban search and rescue missions may have an impact on the logistical planning for the response teams. Additional volume and weight shall be stored and transported to the response site. Additional preparation time shall be allotted to ready the robot for deployment. The tools that are taken to the field may need to be augmented to service the robots. Once the robot is ready for deployment, it shall be transported from the base of operations to the mission zone. Responders may have to carry the robot and its controller or may have to provide some other transportation mechanism if it is too heavy.
5.2 This practice is designed to appraise the impact in terms of logistical considerations for a response organization.
1.1 This practice covers the requirement that urban search and rescue robots and all necessary associated components or equipment (for example, operator control station, power sources, spare parts, sensors, manipulators, tools, and so forth) shall complement the response organization’s cache packaging and transportation systems.
1.2 Shipment by ground, air, or marine should be considered.
1.3 Volume, weight, shipping classification, and deployability of the robots and associated components are considered in this practice.
1.3.1 The deployability is considered through the determination of:
1.3.1.1 The length of time required to prepare the robot system for deployment, and
1.3.1.2 The types of tools required for servicing the robot system in the field.
1.3.2 Associated components or equipment include not only all the onboard sensors, tethers, and operator control station, but also any spare parts and specialized tools needed for assembly, disassembly, and field servicing.
1.3.3 Associated components also include power equipment necessary for the operation of the system, such as batteries, chargers, and power converters. Gasoline, diesel, or other types of liquid fuel are not included.
1.4 The packaged items shall support the operational availability of the robot during a deployment of up to ten days. There shall be no resupply within the first 72 h of deployment.
1.5 No such standards currently exist except for those relevant to shipping (for example, CFR Title 49 and International Air Transport Association (IATA) documents).
1.6 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only.
1.7 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.


3.1 Packaging materials may be exposed to chemicals such as water, alcohol, acid, etc. during their life cycle. If it is anticipated that the packaging material will be exposed to a chemical, it is important that the ink or coating, or both, not degrade, soften, or dissolve as a result of that contact.
3.2 The testing included in this practice is applicable to surface-printed and coated materials designed to be resistant to a specific chemical.
3.3 The chemicals to be tested should be compatible with (that is, not damage or degrade) the substrate being printed or coated, or both.
3.4 There are four separate methods detailed in this practice. The methods represent increasing degrees of severity from Method A to Method D. Selection of method should be based on the type of exposure anticipated. For example, the pouring method (Method A) is typically used where incidental exposure is anticipated, such as a spill or splash of chemical on the material surface. Method B or C is typically used when chemical resistance is desired, depending on the level of exposure (B) and abrasion (C) anticipated. Method D represents continual contact between the chemical and the material, where the material would need to be chemical-proof (for example, if the package were to be submerged in the chemical and exposed to abrasion over a period of time).
3.5 This practice does not address acceptability criteria. These need to be jointly determined by the user and producer of the product, based on the type of exposure that is anticipated.
1.1 This practice describes the procedure for evaluating the ability of an ink, overprint varnish, or coating to withstand chemical exposure. Typical chemicals which may come in contact with the package include water, alcohol, acid, etc. The specific chemical and method of choice, as well as determination of measurement outcome, are left to users to agree upon in joint discussion. Suggestions for ways to measure and collect information are offered in the various methods listed in this practice.
1.2 The values stated in either SI units or inch-pound units are to be regarded separately as standard. The values stated in each system may not be exact equivalents; therefore, each system shall be used independently of the other. Combining values from the two systems may result in non-conformance with the standard.
1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety, health, and environmental practices and determine the applicability of regulatory limitations prior to use.
1.4 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.


4.1 This test method provides a means for measuring a thickness dimension. Accurate measurement of thickness can be critical to meeting specifications and characterizing process, product, and material performance.
4.2 This test method does not address acceptability criteria. These need to be jointly determined by the user and producer of the product. Repeatability and reproducibility of measurement is shown in the Precision and Bias section. Attention should be given to the inherent variability of materials being measured as this can affect measurement outcome.
1.1 This test method covers the measurement of thickness of flexible packaging materials using contact micrometers.
1.2 The Precision and Bias statement for this test method was developed using both handheld and bench top micrometers with foot sizes ranging from 4.8 mm to 15.9 mm (3/16 in. to 5/8 in.).
1.3 The values stated in either SI units or inch-pound units are to be regarded separately as standard. The values stated in each system may not be exact equivalents; therefore, each system shall be used independently of the other. Combining values from the two systems may result in non-conformance with the standard.
1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety, health, and environmental practices and determine the applicability of regulatory limitations prior to use.
1.5 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.


3.1 Poor adhesion of ink or coating to the base substrate can impact the readability of printed materials, affect the functionality of coated materials, or create a source of contamination. This practice provides a means for evaluating the adhesion of ink or coating to a flexible packaging material.
3.2 For purposes of resolving inter-laboratory disagreements, test methods developed from this practice may be improved by defining and controlling the pressure and method of tape application (for example, using a weighted roller), and the speed and angle of tape removal.
3.3 This practice does not address acceptability criteria. These need to be jointly determined by the user and producer of the product.
1.1 This practice describes a means of evaluating ink or coating adhesion to flexible packaging materials. This practice is intended for use on flexible packaging materials whose surfaces are not damaged by the application and removal of tape.
1.2 The values stated in either SI units or inch-pound units are to be regarded separately as standard. The values stated in each system may not be exact equivalents; therefore, each system shall be used independently of the other. Combining values from the two systems may result in non-conformance with the standard.
1.3 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.


5.1 Leaks in blister packs may affect product quality, and such defects can arise from imperfections in the packaging material or the bond between the sealed surfaces.
5.2 This method of leak testing is a useful tool as it allows non-destructive and non-subjective leak testing of blister packs. It allows the operator to evaluate how different packaging materials and packaging machine conditions affect the integrity of the packaging. It can also provide indication of unwanted changes in the packaging conditions.
5.3 This type of testing is typically used in pharmaceutical packaging production, during stability trials, and for package research and development operations because of its non-destructive nature, cleanliness, and speed.
1.1 Test Packages—This test method can be applied to non-porous blister packs sealed with flexible films such as those used in pharmaceutical packaging. Such blister packs typically consist of thermoformed polymer or cold-formed aluminum trays that contain a number of individual blister pockets into which tablets or capsules are placed. The trays are then sealed with a polymer, paper-backed, or foil-based flexible laminate lidding material.
1.2 Leaks Detected—This test method detects leaks in blister packs by measuring the deflection of the blister pack surface in response to an applied vacuum. This deflection of the blister pack surface results from the difference in pressure between the gas inside the blister pack and the applied vacuum. Air loss from within a blister pocket as a result of a leak will alter this pressure differential, causing a measurable variation in blister pocket deflection. This test method requires that the blister packs be held in appropriate tooling inside a suitable test chamber.
1.3 Test Results—Test results are reported qualitatively (pass/fail). Appropriate acceptance criteria for deflection, height, and collapse values are established by comparing non-leaking packs with those containing defects of a known size. Suitably sized defects in the laminate, tray material, and seal can be detected using this test method. The sensitivity of this test method depends upon a range of factors including blister pocket headspace, blister pocket size, lidding material type, lidding material thickness, lidding material tension, printing, surface texture, test conditions, and the values selected for the pass/fail acceptance criteria. The ability of the test to detect 15 µm, 50 µm, and catastrophic sized holes in four blister pack designs was demonstrated in a study.
1.4 The values stated in SI units are to be regarded as standard and no other units of measurement are included in this test method.
1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
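As a rough illustration of the pass/fail evaluation described in 1.3, the Python sketch below screens blister-pocket deflection readings against an acceptance window derived from reference packs. It is a minimal sketch, assuming the user has already established the window empirically from non-leaking packs and packs with known defects; the function name and threshold values are hypothetical and not part of the ASTM method.

```python
# Illustrative sketch only: pass/fail screening of blister-pocket deflection
# readings against user-derived acceptance limits. The placeholder limits below
# stand in for criteria established from known-good and known-defect packs.

def evaluate_blister_pack(deflections_um, min_ok_um=180.0, max_ok_um=320.0):
    """Return ('pass', None) if every pocket's vacuum-induced deflection falls
    inside the acceptance window, else ('fail', pocket_number). A leaking
    pocket loses its pressure differential and typically falls outside the
    window established from non-leaking reference packs."""
    for pocket, d in enumerate(deflections_um, start=1):
        if not (min_ok_um <= d <= max_ok_um):
            return "fail", pocket
    return "pass", None

# Example: pocket 3 deflects far less than the known-good range -> fail.
print(evaluate_blister_pack([245.0, 251.0, 60.0, 248.0]))  # ('fail', 3)
```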

ASTM F3263-17 Standard Guide for Packaging Test Method Validation (Active)

4.1 Addressing Consensus Standards with Inter-Laboratory Studies (ILS) and Methods Specific to an Organization—Test methods need to be validated in many cases in order to be able to rely on the results. This has to be done at the organization performing the tests, but it is also performed during the development of standards through inter-laboratory studies (ILS), which are not substitutes for the validation work to be performed at the organization performing the test.
4.1.1 Validations at the Testing Organization—Validations at the test-performing organization include planning, executing, and analyzing the studies. Planning should include a description of the scope of the test method, which covers the description of the test equipment as well as the measurement range of samples it will be used for, rationales for the choice of samples, the number of samples, and rationales for the choice of methodology.
4.1.2 Objective of ILS Studies—ILS studies (per E691-14) are not focused on the development of test methods but rather on gathering the information needed for a test method precision statement after the development stage has been successfully completed. The data obtained in the interlaboratory study may indicate, however, that further effort is needed to improve the test method. Precision in this case is defined as the repeatability and reproducibility of a test method, commonly known as gage R&R. For interlaboratory studies, repeatability deals with the variation associated with one appraiser operating a single test system at one facility, whereas reproducibility is concerned with variation between labs, each with their own unique test system. It is important to understand that if an ILS is conducted in this manner, reproducibility between appraisers and test systems in the same lab is not assessed.
4.1.3 Overview of the ILS Process—Essentially, the ILS process consists of planning, executing, and analyzing studies that are meant to assess the precision of a test method. The steps required to do this from an ASTM perspective are: create a task group, identify an ILS coordinator, create the experimental design, execute the testing, analyze the results, and document the resulting precision statement in the test method. For more detail on how to conduct an ILS, refer to E691-14.
4.1.4 Writing Precision and Bias Statements—When writing Precision and Bias statements for an ASTM standard, the minimum expectation is that the standard practice outlined in E177-14 will be followed. However, in some cases it may also be useful to present the information in a form that is more easily understood by the user of the standard. Examples can be found in 4.1.5 below.
4.1.5 Alternative Approaches to Analyzing and Stating Results—Variable Data:
4.1.5.1 Capability Study: (1) A process capability greater than 2.00 indicates that the total variability (part-to-part plus test method) of the test output should be very small relative to the tolerance. (2) Note that σTotal in the capability calculation includes both σPart and σTM. Therefore, two conclusions can be made: (a) the test method can discriminate at least 1/12 of the tolerance, and hence the test method resolution is adequate, so no additional analysis such as a gage R&R study is necessary; and (b) the measurement is precise relative to the specification tolerance. (3) In addition, since the TMV capability study requires involvement of two or more operators utilizing one or more test systems, a high capability number will demonstrate consistent test method performance across operators and test systems.
4.1.5.2 Gage R&R Study: (1) The proposed acceptance criteria for %SV, %R&R, and %P/T come from industry-wide adopted requirements for measurement systems. According to the Automotive Industry Action Group (AIAG) Measurement System Analysis Manual (4th edition, p. 78), a test method can be accepted if the test method variation (σTM) accounts for less than 30 percent of the total variation of the study (σTotal). (2) This is equivalent to %R&R = 100 × σTM / σTotal ≤ 30 %. (3) When historical data are available to evaluate the variability of the process, the same criterion should also be met relative to the historical process variation. (4) For %P/T, another industry-wide accepted practice is to represent the population using the middle 99 % of the normal distribution, and ideally the tolerance range of the output should be wider than this proportion. (5) The factor 5.15 used in the %P/T calculation is the two-sided 99 % Z-score of a normal distribution. (6) In practice this means that a test method with up to 6 % P/T reproducibility would be effective at assessing the P/T for a given design.
4.1.5.3 Power and Sample Size Study: (1) When comparing the means of two or more populations using statistical tests, excessive test method variability may obscure the real difference (the “signal”) and decrease the power of the statistical test. As a result, a large sample size may be needed to maintain adequate power (≥ 80 %) for the statistical test. When the sample size becomes too large to accept from a business perspective, one should improve the test method before running the comparative test. Therefore, an accept/reject decision on a comparative test method could be made based on its impact on the power and sample size of the comparative test (for example, a two-sample t-test).
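To make the arithmetic behind these acceptance criteria concrete, here is a minimal Python sketch, assuming the common AIAG-style definitions %R&R = 100·σTM/σTotal and %P/T = 100·5.15·σTM/(USL − LSL); the numeric inputs and the function name are placeholders for illustration, not values or requirements taken from this guide.

```python
import math

# Illustrative sketch only. sigma_tm is the test method (gage R&R) standard
# deviation and sigma_part is the part-to-part standard deviation; all numbers
# below are placeholders.
def tmv_metrics(sigma_tm, sigma_part, lsl, usl):
    sigma_total = math.sqrt(sigma_tm**2 + sigma_part**2)  # total study variation
    tolerance = usl - lsl
    return {
        "%R&R": 100.0 * sigma_tm / sigma_total,        # acceptance criterion: <= 30 %
        "%P/T": 100.0 * 5.15 * sigma_tm / tolerance,   # 5.15 = two-sided 99 % Z-span
        "Cp": tolerance / (6.0 * sigma_total),         # capability of the test output
    }

print(tmv_metrics(sigma_tm=0.02, sigma_part=0.10, lsl=1.0, usl=1.5))
# {'%R&R': ~19.6, '%P/T': ~20.6, 'Cp': ~0.82} for these placeholder inputs
```

Note that a capability of at least 2.00 is the same as the tolerance spanning at least 12 σTotal, which is where the "1/12 of the tolerance" discrimination statement in 4.1.5.1 comes from.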
4.2 Attribute Test Method Validation:
4.2.1 Objective of Attribute Test Method Validation—Attribute test method validation (ATMV) demonstrates that the training and tools provided to inspectors enable them to distinguish between good and bad product with a high degree of success. There are two criteria used to measure whether an ATMV has met this objective. The primary criterion is to demonstrate that the maximum escape rate, β, is less than or equal to its prescribed threshold, βmax. The parameter β is also known as Type II error, which is the probability of wrongly accepting a non-conforming device. The secondary criterion is to demonstrate that the maximum false alarm rate, α, is less than or equal to its prescribed threshold, αmax. The parameter α is also known as Type I error, which is the probability of wrongly rejecting a conforming device.
4.2.2 Overview of the ATMV Process—In an attribute test method validation, a single blind study is conducted that comprises both conforming and non-conforming units. The ATMV passes when the requirements of both sampling plans are met. The first sampling plan demonstrates that the test method meets the requirements for the maximum allowable beta error (escape rate), and the second sampling plan demonstrates that the test method meets the requirements for the maximum allowable alpha error (false alarm rate). In other words, the test method is able to demonstrate that it accepts conforming units and rejects non-conforming units with high levels of effectiveness. The beta error sampling plan consists entirely of nonconforming units. The beta trials conducted by each inspector are pooled together, and the total number of misclassifications (nonconforming units that were accepted) must be less than or equal to the number of failures prescribed by the beta error sampling plan. The alpha error sampling plan consists entirely of conforming units. The alpha trials conducted by each inspector are pooled together, and the total number of misclassifications (conforming units that were rejected) must be less than or equal to the number of failures prescribed by the alpha error sampling plan.
4.2.3 ATMV Examples—Attribute test methods cover a broad range of testing. Examples of these test method categories are listed in Table 1. The right half of the table consists of test methods that return qualitative responses, and the left half of the table contains test methods that provide variable measurement data.
4.2.4 ATMV for Variable Measurement Data—It is good practice to analyze variable test methods as variable measurement data whenever possible. However, there are instances where measurement data are more effectively treated as qualitative data. Example: a sterile barrier system (SBS) for medical devices with a required seal strength specification of 1.0–1.5 lb/in. is to be validated. A tensile tester is to be used to measure the seal strength, but it only has a resolution of 0.01 lb. As a result, the Ppk calculations typically fail, even though a seal that is out of specification in production is very rare. The validation team determines that the data will need to be treated as attribute data, and therefore an ATMV will be required rather than a variable test method validation.
4.2.5 Self-Evident Inspections—This section illustrates the requirements of a self-evident inspection called out in the definitions above. To be considered a self-evident inspection, a defect is both discrete in nature and requires little or no training to detect. The defect cannot satisfy just one or the other requirement.
4.2.5.1 The following may be considered self-evident inspections: (1) A sensor light illuminates when the lubricity level on a wire is correct and does not light up when lubrication is insufficient. Since the test equipment creates a binary output for the inspector and the instructions are simple, this qualifies as self-evident; note, however, that the test method involving the equipment still needs to be validated. (2) A component is present in the assembly. If the presence of the component is reasonably easy to detect, this qualifies as self-evident since the outcome is binary. (3) The correct component is used in the assembly. As long as the components are distinct from one another, this qualifies as self-evident since the outcome is binary.
4.2.5.2 The following would generally not be considered self-evident inspections: (1) Burn or heat discoloration. Unless the component completely changes color when overheated, this inspection will require the inspector to detect traces of discoloration, which fails to satisfy the discrete-condition requirement. (2) Improper forming of an S-bend or Z-bend. The component is placed on top of a template, and the inspector verifies that the component is entirely within the boundaries of the template. The bend can vary from perfectly shaped to completely out of the boundaries in multiple locations, with every level of bend in between. Therefore, this is not a discrete outcome. (3) No nicks on the surface of the component. A nick can vary in size from "not visible under magnification" to "not visible to the unaided eye" to "plainly visible to the unaided eye." Therefore, this is not a discrete outcome. (4) No burrs on the surface of a component. Inspectors vary in the sensitivity of their touch due to calluses on their fingers, and burrs vary in their degree of sharpness and exposure. Therefore, this is neither a discrete condition nor an easy-to-train instruction. (5) The component is cracked. Cracks vary in length and severity, and inspectors vary in their ability to see visual defects. Therefore, this is neither a discrete outcome nor an easy-to-train instruction.
4.2.6 ATMV Steps:
4.2.6.1 Step 1 – Prepare the test method documentation: (1) Make sure equipment qualifications have been completed, or are at least in the validation plan to be completed, prior to executing the ATMV. (2) Examples of equipment settings to be captured in the test method documentation include environmental or ambient conditions, magnification level on microscopes, lighting and feed rate on automatic inspection systems, pressure on a vacuum decay test, and lighting standards in a cleanroom, which might involve taking lux readings in the room to characterize the light level. (3) Work with training personnel to create pictures of the defects. It may be beneficial to also include pictures of good product and less extreme examples of the defect, since the spectrum of examples will provide better resolution for decision making. (4) Where possible, the visual design standards should be shown at the same magnification level as will be used during inspection. (5) Make sure that the ATMV is run using the most recent visual design standards and that they are good representations of the potential defects.
4.2.6.2 Step 2 – Establish acceptance criteria: (1) Identify which defects need to be included in the test. (2) Use scrap history to identify the frequency of each defect code or type. This could also be information that is simply provided by the SME. (3) Do not try to squeeze too many defects into a single inspection step. As more defects are added to an inspection process, inspectors will eventually reach a point where they are unable to check for everything, and this threshold may also show itself in the ATMV testing. Limits will vary by the type of product and test method, but for visual inspection, 15 to 20 defects may be the maximum number that is attainable.
4.2.6.3 Step 3 – Determine the required performance level of each defect: (1) If the ATMV testing precedes completion of a risk analysis, the suggested approach is to use a worst-case outcome or high-risk designation. This needs to be weighed against the increase in sample size associated with the more conservative rating. (2) Failure modes that do not have an associated risk index may be tested to whatever requirements are agreed upon by the validation team. If a component or assembly can be scrapped for a particular failure mode, good business sense is to make sure that the inspection is effective by conducting an ATMV. (3) Pin gages are an example of a variable output that is sometimes treated as attribute data due to poor resolution combined with tight specification limits. In this application, inspectors are trained prior to the testing to understand the level of friction that is acceptable versus unacceptable. (4) Incoming inspection is another example of where variable data are often treated as attribute data. Treating variable measurements as pass/fail outcomes can allow for less complex measurement tools such as templates and require less training for inspectors. However, these benefits should be weighed against the additional samples that may be required and the degree of information lost. For instance, attribute data would say that samples centered between the specification limits are no different from samples just inside the specification limits. This could result in greater downstream costs and more difficult troubleshooting for yield improvements.
4.2.6.4 Step 4 – Determine acceptance criteria: (1) Refer to your company’s predefined confidence and reliability requirements; or (2) refer to the chart example in Appendix X1.
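As an illustration of how a confidence/reliability requirement such as the one referenced in Step 4 can translate into an attribute sampling plan, the Python sketch below applies the standard success-run (zero-failure) relationship n ≥ ln(1 − C)/ln(R). The specific confidence and reliability values are examples only, not requirements of this guide.

```python
import math

def success_run_sample_size(confidence, reliability):
    """Smallest zero-failure sample size n such that observing n successes in
    n trials demonstrates the stated reliability at the stated confidence,
    i.e. reliability**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Example values only (not requirements of this guide):
print(success_run_sample_size(0.90, 0.95))  # 45 trials, 0 failures allowed
print(success_run_sample_size(0.95, 0.95))  # 59 trials, 0 failures allowed
```

The 45-trial, zero-failure plan computed here is consistent with the n = 45, a = 0 beta error plan discussed later under Step 11.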
4.2.6.5 Step 5 – Create the validation plan: (1) Determine the proportion of each defect in the sample. (a) While some sort of rationale should be provided for how the defect proportions are distributed in the ATMV, there is some flexibility in choosing the proportions. Therefore, different strategies may be employed for different products and processes, for example 10 defective parts in 30 or 20 defects in 30. The cost of the samples, along with the risk associated with incorrect outcomes, affects decision making. (b) Scrap production data will often not be available for new products. In these instances, use historical scrap from a similar product or estimate the expected scrap proportions based on process challenges that were observed during development. Another option is to represent all of the defects evenly.
4.2.6.6 Step 6 – Determine the number of inspectors and devices needed: (1) When the number of trials is large, consider employing more than three inspectors to reduce the number of unique parts required for the test. More inspectors can inspect the same parts without adding more parts to achieve additional trials and greater statistical power. (2) Inspectors are not required to all look at the same samples, although this is probably the simplest approach. (3) For semi-automated inspection systems that are sensitive to fixture placement or setup by the inspector, multiple inspectors should still be employed for the test. (4) For automated inspection systems that are completely inspector-independent, only one inspector is needed. However, in order to reduce the number of unique parts needed, consider controlling other sources of variation such as lighting conditions, temperature, humidity, inspection time, day/night shift, and part orientations.
4.2.6.7 Step 7 – Prepare the inspectors: (1) Train the inspectors prior to testing: (a) Explain the purpose and importance of the ATMV to the inspectors. (b) Inspector training should be a two-way process. The validation team should seek feedback from the inspectors on the quality and clarity of visual standards, pictures, and written descriptions in the inspection documentation. Are there any gray areas that need clarification? Would a diagram be more effective than an actual picture of the defect? (c) Review borderline samples. Consider adding pictures or diagrams of borderline samples to the visual standards. In some cases there may be a difference between functional and cosmetic defects; this may vary by method or package type. (d) Some validation teams have performed dry-run testing to characterize the current effectiveness of the inspection. Note that the same samples should not be used for dry-run testing and final testing if the same inspectors are involved in both tests.
4.2.6.8 Step 8 – Select a representative group of inspectors as the test group: (1) There will be situations, such as a site transfer, where all of the inspectors have about the same level of familiarity with the product. If this is the case, select the test group of inspectors based on other sources of variability among the inspectors, such as their production shift, skill level, or years of experience with similar product inspection. (2) The inspectors selected for testing should at least have familiarity with the product, or this becomes an overly conservative test. For example, a lack of experience with the product may result in an increase in false positives. (3) Document that a varied group of inspectors was selected for testing.
4.2.6.9 Step 9 – Prepare the test samples: (1) Collect representative units. (a) Be prepared for ATMV testing by collecting representative defect devices early and often in the development process. Borderline samples are particularly valuable to collect at this time. However, be aware that a sample that cannot even be agreed upon as good or bad by the subject matter experts is only going to cause problems in the testing. Instead, choose samples that are representative of "just passing" and "just failing" relative to the acceptance criteria. (2) Use best judgment as to whether man-made defect samples adequately represent defects that naturally occur during the sealing process, distribution simulation, or other manufacturing processes. If a defect cannot be adequately replicated, or the occurrence rate is too low to provide a sample for the testing, this may be a situation where the defect type can be omitted from the testing with rationale. (3) Estimate from a master plan how many defects will be necessary for testing, and try to obtain 1.5 times the estimated number of samples required for testing. This will allow for weeding out broken samples and less desirable samples. (4) Traceability of samples may not be necessary. The only requirement on samples is that they accurately depict conformance or the intended nonconformance. However, capturing traceability information may be helpful for investigational purposes if there is difficulty validating the method or if it is desirable to track outputs to specific non-conformities. (5) There should preferably be more than one SME to confirm the status of each sample in the test. Keep in mind that a trainer or production supervisor might also be an SME on the process defect types. (6) Select a storage method appropriate for the particular sample. Potential options include tackle boxes with separate labeled compartments, plastic resealable bags, and plastic vials. Refer to your standardized test method for pre-conditioning requirements. (7) Writing a secret code number on each part successfully conceals the type of defect, but it is not an effective means of concealing the identity of the part. In other words, if an inspector is able to remember the identification number of a sample and the defect they detected on that sample, then the test has been compromised the second time the inspector is given that sample. If each sample is viewed only once by each inspector, then placing the code number on the sample is not an issue. (8) Video testing is another option for some manual visual inspections, especially if the defect has the potential to change over time, such as a crack or foreign material. (9) If the product is extremely long or large, such as a guidewire, guide catheter, pouch, tray, or container closure system (jar and lid), and the defects of interest are only in a particular segment of the product, one can choose to detach the pertinent segment from the rest of the sample. If extenuating factors such as length or delicacy are part of what makes the full product challenging to inspect, then the full product should be used, for example, a leak test where liquid in the package could affect the test result. (10) Take pictures or videos of samples with defects and store them in a key for future reference.
4.2.6.10 Step 10 – Develop the protocol: (1) Suggested protocol sections: (a) Purpose and scope. (b) Reference to the test method document being validated. (c) A list of references to other related documents, if applicable. (d) A list of the types of equipment, instruments, fixtures, etc. used for the TMV. (e) TMV study rationale, including: the statistical method used for TMV; the characteristics measured by the test method and the measurement range covered by the TMV; a description of the test samples and the rationale; the number of samples, number of operators, and number of trials; and the data analysis method, including any historical statistics that will be used for the data analysis (for example, the historical average for calculating %P/T with a one-sided specification limit). (f) TMV acceptance criteria. (g) Validation test procedures (for example, sample preparation, test environment setup, test order, data collection method, etc.). (h) Methods of randomization (a sketch of one approach follows this subsection): (1) There are multiple ways to randomize the order of the samples. In all cases, store the randomized order in another column, then repeat and append the second randomized list to the first stored list for each sample that is being inspected a second time by the same inspector. (2) Consider using Excel, Minitab, or an online random number generator to create the run order for the test. (3) Draw numbers from a container until the container is empty and record the order. (i) Some companies apply time limits to each sample or a total time limit for the test so that the testing is more representative of the fast-paced requirements of the production environment. If used, this should be noted in the protocol.
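As one way to implement the randomization described in item (h), the hedged Python sketch below builds a blinded run order for each inspector, with a second appended pass for samples that are inspected twice; the sample labels and inspector names are made up for illustration and are not part of this guide.

```python
import random

def build_run_order(sample_ids, inspectors, repeat_samples=(), seed=None):
    """Return {inspector: ordered list of sample ids}. Samples listed in
    repeat_samples are appended in a second randomized pass, mirroring the
    'repeat and append the second randomized list' approach in item (h)."""
    rng = random.Random(seed)
    orders = {}
    for inspector in inspectors:
        first_pass = list(sample_ids)
        rng.shuffle(first_pass)
        second_pass = list(repeat_samples)
        rng.shuffle(second_pass)
        orders[inspector] = first_pass + second_pass
    return orders

# Hypothetical example: six coded samples, three inspectors, two samples seen twice.
orders = build_run_order([f"S{i:02d}" for i in range(1, 7)],
                         ["Inspector A", "Inspector B", "Inspector C"],
                         repeat_samples=["S02", "S05"], seed=1)
for name, order in orders.items():
    print(name, order)
```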
4.2.6.11 Step 11 – Execute the protocol: (1) Be sure to comply with the pre-conditioning requirements during protocol execution. (2) Avoid situations where the inspector is hurrying to complete the testing. Estimate how long each inspector will take and plan ahead to start each test with enough time in the shift for the inspector to complete their section, or communicate that the inspector will be allowed to go for lunch or a break during the test. (3) Explain to the inspector which inspection steps are being tested. Clarify whether there may be more than one defect per sample; however, note that more than one defect on a sample can create confusion during the testing. (4) If the first person fails to correctly identify the presence or absence of a defect, it is a business/team decision whether to continue the protocol with the remaining inspectors. Completing the protocol will help characterize whether the issues are widespread, which could help avoid failing again the next time. On the other hand, aborting the ATMV right away could save considerable time for everyone. (5) It is not good practice to change the sampling plan during the test if a failure occurs. For instance, if the original beta error sampling plan was n = 45, a = 0, and a failure occurs, updating the sampling plan to n = 76, a = 1 during the test is incorrect, since the sampling plan actually being performed is a double sampling plan with n1 = 45, a1 = 0, r1 = 2, n2 = 31, a2 = 1. This results in an LTPD of 5.8 %, rather than the 5.0 % LTPD of the original plan (the sketch following this subsection reproduces this calculation). (6) Be prepared with replacement samples in reserve in case a defect sample becomes damaged. (7) Running the test concurrently with all of the test inspectors is risky, since the administrator will be responsible for keeping track of which inspector has each unlabeled sample. (8) Review misclassified samples after each inspector to determine whether the inspector might have detected a defect that the prep team missed.
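To show where the 5.0 % and 5.8 % LTPD figures in item (5) come from, here is a minimal Python sketch that computes the lot tolerance percent defective (the defect rate that is accepted only 10 % of the time) for the single and the unplanned double sampling plan named above. The 10 % consumer-risk convention is an assumption consistent with those figures, and scipy is used for the root search; the function names are illustrative only.

```python
from scipy.stats import binom
from scipy.optimize import brentq

def p_accept_single(p, n=45, a=0):
    """Probability that a lot with defect rate p is accepted by an
    n-trial plan allowing at most a failures."""
    return binom.cdf(a, n, p)

def p_accept_double(p, n1=45, n2=31):
    """Unplanned double plan (n1=45, a1=0, r1=2, n2=31, a2=1): accept on zero
    failures in the first n1 trials; on exactly one failure, run n2 more
    trials and accept only if none of them fail."""
    return binom.pmf(0, n1, p) + binom.pmf(1, n1, p) * binom.pmf(0, n2, p)

def ltpd(p_accept, consumer_risk=0.10):
    """Defect rate whose probability of acceptance equals the consumer risk."""
    return brentq(lambda p: p_accept(p) - consumer_risk, 1e-6, 0.5)

print(f"{ltpd(p_accept_single):.3f}")  # ~0.050 -> 5.0 % LTPD for n=45, a=0
print(f"{ltpd(p_accept_double):.3f}")  # ~0.058 -> 5.8 % LTPD for the double plan
```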
4.2.6.12 Step 12 – Analyze the test results: (1) Scrapping for the wrong defect code or defect type: (a) There will be instances where an inspector describes a defect with a word that was not included in the protocol. The validation team needs to determine whether the word used is synonymous with any of the listed names for this particular defect. If not, then the trial fails. If the word matches the defect, then note the exception in the deviations section of the report. (2) Excluding data from calculations of performance: (a) If a defect is discovered after the test is complete, there are two suggested options. First, the inspector may be tested on a replacement part later if necessary. Alternatively, if the results of the individual trial will not alter the final result of the sampling plan, then the replacement trials can be bypassed. This rationale should be documented in the deviations section of the report. (1) As an example, consider an alpha sampling plan of n = 160, a = 13 that is designed to meet a 12 % alpha error rate. After all inspectors had completed the test, it was determined that one of the conforming samples had a defect; five of the six trials on this sample identified the defect, while one of the six called it a conforming sample. The results of the six trials need to be scratched, but do they need to be repeated? If the remaining 154 conforming trials have few enough failures to still meet the required alpha error rate of 12 %, then no replacement trials are necessary. The same rationale would also apply to a defective sample in a beta error sampling plan. (2) If a vacuum decay test sample should have failed the leak test, the protocol may call for sending the sample back to the company that created the defective sample for confirmation that it is indeed still defective. If it is found to no longer be representative of the desired defect type, then the sample would be excluded from the calculations.
4.2.6.13 Step 13 – Complete the validation report: (1) When the validation test passes: (a) If the ATMV was difficult to pass or it requires special inspector training, consider adding an appraiser proficiency test to limit those who are eligible for the process inspection. (2) When the validation test fails: (a) Repeating the validation: (1) There is no restriction on how many times an ATMV can fail. However, some common sense should be applied, as a high number of attempts appears to be a test-until-you-pass approach and could become an audit flag. Therefore, a good rule of thumb is to perform a dry run or feasibility assessment prior to execution to optimize appraiser training and test methodology in order to reduce the risk of failing the protocol. If an ATMV fails, members of the validation team could take the test themselves. If the validation team passes, then something is not being communicated clearly to the inspectors, and additional interviews are needed to identify the confusion. If the validation team also fails the ATMV, this is a strong indication that the visual inspection or attribute test method is not ready for release. (b) User error: (1) Examples of ATMV test error include: (a) the microscope set at the wrong magnification; (b) sample traceability compromised during the ATMV due to a sample mix-up. (2) A test failure demonstrates that the variability among inspectors needs to be reduced. The key is to understand why the test failed, correct the issue, and document the rationale, so that subsequent tests do not appear to be a test-until-you-pass approach. (3) As much as possible, the same samples should not be used for the subsequent ATMV if the same inspectors are being tested that were in the previous ATMV. (4) Interview any inspectors who committed classification errors to understand whether their errors were due to a misunderstanding of the acceptance criteria or simply a miss. (5) To improve the proficiency of defect detection and test methodology, the following are some suggested best practices: (a) Define an order of inspection in the work instruction for the inspectors, such as moving from proximal end to distal end or doing the inside then the outside. (b) When inspecting multiple locations on a component or assembly for specific attributes, provide a visual template with ordered numbers to follow during the inspection. (c) Transfer the microscope picture to a video screen for easier viewing. (d) If there are too many defect types to look for at one inspection step, some may get missed. Move any inspections not associated with the process back upstream to the process that would have created the defect. (6) When an inspector has misunderstood the criteria, the need is to better differentiate good and nonconforming product. Here are some ideas: (a) Review the visual standard of the defect with the inspectors and trainers. (b) Determine whether a diagram might be more informative than a photo. (c) Change the magnification level on the microscope. (d) If an ATMV is failing because borderline defects are being wrongly accepted, slide the manufacturing acceptance criteria threshold to a more conservative level. This will potentially increase the alpha error rate, which typically has a higher error rate allowance anyway, but the beta error rate should decrease. (7) Consider using an attribute agreement analysis to help identify the root cause of the ATMV failure, as it is a good tool to assess the agreement of nominal or ordinal ratings given by multiple appraisers. The analysis will calculate both the repeatability of each individual appraiser and the reproducibility between appraisers, similar to a variable gage R&R.
4.2.6.14 Step 14 – Post-validation activities: (1) Test method changes: (a) If requirements, standards, or test methods change, the impact on the other two needs to be assessed. (b) As an example, many attribute test methods such as visual inspection have no impact on the form, fit, or function of the device being tested. Therefore, it is easy to overlook that changes to the test method criteria documented in design prints, visual design standards, and visual process standards need to be closely evaluated for what impact the change might have on the performance of the device. (c) A good practice is to bring together representatives from operations and design to review the proposed change and consider potential outcomes of the change. (d) For example, changes to the initial visual inspection standards that were used during design verification builds may not identify defects prior to going through the process of distribution simulation. Stresses that were missed during this initial inspection may be exacerbated by exposure to the shock, vibration, and thermal cycling associated with the distribution simulation process. Thus, it is important to understand the impact that changes to the visual standards used upstream may have on downstream inspections. (2) Augmented test method validation—Sometimes a new defect is identified after the ATMV has already been validated. There are a variety of ways to validate detection of the new failure mode. (a) Option #1 – Repeat the entire validation with the addition of the new criterion. (1) Advantages: The end result is a complete, stand-alone validation that completely represents the final inspection configuration. (2) Disadvantages: This is an excessive level of work that amounts to revalidation of t

ASTM D996-23 Standard Terminology of Packaging and Distribution Environments (Active)

1.1 This terminology is a compilation of definitions of technical terms used in the packaging and distribution environments. Terms that are generally understood or adequately found in other readily available sources are not included.
1.2 A definition is a single sentence with additional information included in discussions.
1.3 Definitions that are identical to those published by another standards organization or ASTM committee are identified with the name of the organization or ASTM committee.
1.4 The definitions in this terminology are grouped into related areas under principal concepts. The broad descriptor term for each group is followed in alphabetical order by narrower terms and related terms. Cross-references are included where the concept group is not obvious.
1.5 Terminology related to flexible barrier packaging is found in Terminology F17.
1.6 This international standard was developed in accordance with internationally recognized principles on standardization established in the Decision on Principles for the Development of International Standards, Guides and Recommendations issued by the World Trade Organization Technical Barriers to Trade (TBT) Committee.


4.1 This standard provides guidance in determining the most appropriate procedures for packaging and shipping environmental samples. Use of this guide by personnel involved in packaging and shipping environmental samples will facilitate safe, effective and compliant procedures.
4.2 Due to the changing nature of regulations and other information, users are advised to thoroughly research requirements related to packaging and shipping prior to initiating a sampling event that will require shipment of the samples.
1.1 This standard provides guidance on the selection of procedures for proper packaging and shipment of environmental samples to the laboratory for analysis to ensure compliance with appropriate regulatory programs and protection of sample integrity during shipment.
1.2 This standard does not address transport of hazardous wastes for disposal purposes.
1.3 This standard does not address the selection of parameter-specific sample bottles or containers.
1.4 This guide offers an organized collection of information or a series of options and does not recommend a specific course of action. This guide cannot replace education or experience and should be used in conjunction with professional judgment. Not all aspects of this guide may be applicable in all circumstances. This guide is not intended to represent or replace the standard of care by which the adequacy of a given professional service must be judged, nor should this guide be applied without consideration of the many unique aspects of a project. The word “standard” in the title of this guide means only that the guide has been approved through the ASTM consensus process.
1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory requirements prior to use.
