Standard Details

[Foreign Standard] Standard Guide for Packaging Test Method Validation

Posted on this site: 2024-02-28
Standard Overview

Scope:

4.1 This guide addresses consensus standards validated through inter-laboratory studies (ILS) as well as test methods specific to an organization. In many cases, test methods need to be validated in order for their results to be relied upon. This validation has to be done at the organization performing the tests; it is also performed during the development of standards in inter-laboratory studies (ILS), but an ILS is not a substitute for the validation work to be performed at the organization performing the test.

4.1.1 Validations at the Testing Organization—Validations at the organization performing the test include planning, executing, and analyzing the studies. Planning should include a description of the scope of the test method, covering the test equipment and the measurement range of the samples it will be used for, together with rationales for the choice of samples, the number of samples, and the choice of methodology.

4.1.2 Objective of ILS Studies—ILS studies (per E691-14) are not focused on the development of test methods but rather on gathering the information needed for a test method precision statement after the development stage has been successfully completed. The data obtained in the interlaboratory study may indicate, however, that further effort is needed to improve the test method. Precision in this case is defined as the repeatability and reproducibility of a test method, commonly known as gage R&R. For interlaboratory studies, repeatability deals with the variation associated with one appraiser operating a single test system at one facility, whereas reproducibility is concerned with variation between labs, each with its own unique test system. It is important to understand that if an ILS is conducted in this manner, reproducibility between appraisers and test systems in the same lab is not assessed.

4.1.3 Overview of the ILS Process—Essentially, the ILS process consists of planning, executing, and analyzing studies that are meant to assess the precision of a test method. The steps required to do this from an ASTM perspective are: create a task group, identify an ILS coordinator, create the experimental design, execute the testing, analyze the results, and document the resulting precision statement in the test method. For more detail on how to conduct an ILS, refer to E691-14.

4.1.4 Writing Precision and Bias Statements—When writing precision and bias statements for an ASTM standard, the minimum expectation is that the standard practice outlined in E177-14 will be followed. However, in some cases it may also be useful to present the information in a form that is more easily understood by the user of the standard. Examples can be found in 4.1.5 below.

4.1.5 Alternative Approaches to Analyzing and Stating Results—Variable Data:

4.1.5.1 Capability Study: (1) A process capability greater than 2.00 indicates that the total variability (part-to-part plus test method) of the test output should be very small relative to the tolerance (the capability equation is reconstructed below). (2) Note that σTotal in the capability equation includes both σPart and σTM. Therefore, two conclusions can be made: (a) the test method can discriminate at least 1/12 of the tolerance, and hence the test method resolution is adequate, so no additional analysis such as a gage R&R study is necessary; and (b) the measurement is precise relative to the specification tolerance. (3) In addition, since the TMV capability study requires the involvement of two or more operators utilizing one or more test systems, a high capability number demonstrates consistent test method performance across operators and test systems.
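The equations themselves do not survive in this abstract. A minimal reconstruction, assuming the conventional capability definitions that the surrounding sentences describe (capability of the test output against the specification tolerance, with total variation combining the part-to-part and test method components):

```latex
% Assumed conventional forms of the equations omitted from 4.1.5.1:
C_p \;=\; \frac{\mathrm{USL}-\mathrm{LSL}}{6\,\sigma_{Total}} \;>\; 2.00,
\qquad
\sigma_{Total} \;=\; \sqrt{\sigma_{Part}^{2} + \sigma_{TM}^{2}}
```

Under these forms, a capability above 2.00 gives 6σTotal < (USL − LSL)/2, that is, σTotal < (USL − LSL)/12, which matches conclusion (a) that the method discriminates at least 1/12 of the tolerance.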
4.1.5.2 Gage R&R Study: (1) The proposed acceptance criteria for %SV, %R&R, and %P/T come from industry-wide adopted requirements for measurement systems. According to the Automotive Industry Action Group (AIAG) Measurement System Analysis Manual (4th edition, p. 78), a test method can be accepted if the test method variation (σTM) accounts for less than 30 percent of the total variation of the study (σTotal). (2) This is equivalent to %R&R = (σTM / σTotal) × 100 % ≤ 30 %. (3) When historical data is available to evaluate the variability of the process, the analogous requirement should also hold with the historical process variation in place of the study's total variation. (4) For %P/T, another industry-wide accepted practice is to represent the population using the middle 99 % of the normal distribution, and ideally the tolerance range of the output should be wider than this proportion; for a normally distributed population, this means 5.15 × σTotal ≤ USL − LSL. (5) The factor 5.15 is the two-sided 99 % Z-score of a normal distribution. (6) In practice this means that a test method with up to 6 % P/T reproducibility would be effective at assessing the P/T for a given design.

4.1.5.3 Power and Sample Size Study: (1) When comparing the means of two or more populations using statistical tests, excessive test method variability may obscure the real difference (the “signal”) and decrease the power of the statistical test. As a result, a large sample size may be needed to maintain adequate power (≥ 80 %) for the statistical test. When the sample size becomes too large to accept from a business perspective, the test method should be improved before running the comparative test. Therefore, an accept/reject decision on a comparative test method could be made based on its impact on the power and sample size of the comparative test, for example a 2-sample t-test; a sample-size sketch follows below.
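To make that sample-size impact concrete, here is a minimal sketch (assuming Python with the statsmodels package; the signal and variance figures are illustrative placeholders, not values from this guide) of how growing σTM shrinks the standardized effect and inflates the n needed for 80 % power:

```python
# Illustrative sketch: impact of test method variation on 2-sample t-test size.
# Assumes statsmodels; all numbers are placeholders, not from ASTM F3263.
import math
from statsmodels.stats.power import TTestIndPower

signal = 0.5        # true difference between the two population means
sigma_part = 1.0    # part-to-part standard deviation
solver = TTestIndPower()

for sigma_tm in (0.0, 0.3, 0.6, 1.0):        # test method standard deviation
    # observed variation combines part and test method components
    sigma_total = math.sqrt(sigma_part**2 + sigma_tm**2)
    effect_size = signal / sigma_total       # standardized effect shrinks as sigma_tm grows
    n_per_group = solver.solve_power(effect_size=effect_size,
                                     alpha=0.05, power=0.80,
                                     alternative='two-sided')
    print(f"sigma_TM={sigma_tm:.1f} -> n per group = {math.ceil(n_per_group)}")
```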
4.2 Attribute Test Method Validation:

4.2.1 Objective of Attribute Test Method Validation—Attribute test method validation (ATMV) demonstrates that the training and tools provided to inspectors enable them to distinguish between good and bad product with a high degree of success. Two criteria are used to measure whether an ATMV has met this objective. The primary criterion is to demonstrate that the maximum escape rate, β, is less than or equal to its prescribed threshold βmax; the parameter β is also known as Type II error, the probability of wrongly accepting a non-conforming device. The secondary criterion is to demonstrate that the maximum false alarm rate, α, is less than or equal to its prescribed threshold αmax; the parameter α is also known as Type I error, the probability of wrongly rejecting a conforming device.

4.2.2 Overview of the ATMV Process—This section describes how an ATMV typically works. In an attribute test method validation, a single blind study is conducted comprising both conforming and non-conforming units. The ATMV passes when the requirements of both sampling plans are met. The first sampling plan demonstrates that the test method meets the requirements for the maximum allowable beta error (escape rate), and the second demonstrates that it meets the requirements for the maximum allowable alpha error (false alarm rate). In other words, the test method demonstrates that it accepts conforming units and rejects non-conforming units with high levels of effectiveness. The beta error sampling plan consists entirely of nonconforming units: the beta trials conducted by each inspector are pooled together, and the total number of misclassifications (nonconforming units that were accepted) needs to be less than or equal to the number of failures allowed by the beta error sampling plan. The alpha error sampling plan consists entirely of conforming units: the alpha trials conducted by each inspector are pooled together, and the total number of misclassifications (conforming units that were rejected) needs to be less than or equal to the number of failures allowed by the alpha error sampling plan. A sketch of this pass/fail pooling follows below.
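A minimal sketch of the two-plan decision just described (the acceptance numbers here are placeholders; a real protocol derives them from its confidence/reliability requirements, see Step 4 and Appendix X1):

```python
# Illustrative sketch of the two-sampling-plan ATMV decision in 4.2.2.
# Plan parameters are placeholders, not values from the standard.

def atmv_passes(beta_misses: int, beta_allowed: int,
                alpha_misses: int, alpha_allowed: int) -> bool:
    """Pooled trials across all inspectors must satisfy BOTH sampling plans."""
    beta_ok = beta_misses <= beta_allowed      # nonconforming units wrongly accepted
    alpha_ok = alpha_misses <= alpha_allowed   # conforming units wrongly rejected
    return beta_ok and alpha_ok

# Example: 45 pooled beta trials accepting on 0 failures,
# 90 pooled alpha trials accepting on up to 4 failures.
print(atmv_passes(beta_misses=0, beta_allowed=0,
                  alpha_misses=2, alpha_allowed=4))   # True: both plans met
```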
4.2.3 ATMV Examples—Attribute test methods cover a broad range of testing. Examples of these test method categories are listed in Table 1. The right half of the table consists of test methods that return qualitative responses, and the left half contains test methods that provide variable measurement data.

4.2.4 ATMV for Variable Measurement Data—It is good practice to analyze variable test methods as variable measurement data whenever possible. However, there are instances where measurement data is more effectively treated as qualitative data. Example: a sterile barrier system (SBS) for medical devices with a required seal strength specification of 1.0–1.5 lb/in. is to be validated. A tensile tester is to be used to measure the seal strength, but it only has a resolution of 0.01 lb. As a result, the Ppk calculations typically fail, even though a seal is very rarely out of specification in production. The validation team determines that the data will need to be treated as attribute data, and therefore an ATMV is required rather than a variable test method validation.

4.2.5 Self-evident Inspections—This section illustrates the requirements of a self-evident inspection, called out in the definitions above. To be considered self-evident, an inspection must target a defect that is both discrete in nature and requires little or no training to detect; satisfying just one of the two requirements is not enough.

4.2.5.1 The following may be considered self-evident inspections: (1) A sensor light illuminates when the lubricity level on a wire is correct and does not light up when lubrication is insufficient. Since the test equipment creates a binary output for the inspector and the instructions are simple, this qualifies as self-evident; note, however, that the equipment-based test method itself still needs to be validated. (2) A component is present in the assembly. If the presence of the component is reasonably easy to detect, this qualifies as self-evident since the outcome is binary. (3) The correct component is used in the assembly. As long as the components are distinct from one another, this qualifies as self-evident since the outcome is binary.

4.2.5.2 The following would generally not be considered self-evident inspections: (1) Burn or heat discoloration. Unless the component completely changes color when overheated, this inspection requires the inspector to detect traces of discoloration, which fails the discrete-condition requirement. (2) Improper forming of an S-bend or Z-bend. The component is placed on top of a template, and the inspector verifies that the component is entirely within the boundaries of the template. The bend can vary from perfectly shaped to completely out of the boundaries in multiple locations, with every level of bend in between; therefore, this is not a discrete outcome. (3) No nicks on the surface of the component. A nick can vary in size from “not visible under magnification” to “not visible to the unaided eye” to “plainly visible to the unaided eye”; therefore, this is not a discrete outcome. (4) No burrs on the surface of a component. Inspectors vary in the sensitivity of their touch due to calluses on their fingers, and burrs vary in their degree of sharpness and exposure; therefore, this is neither a discrete condition nor an easy-to-train instruction. (5) The component is cracked. Cracks vary in length and severity, and inspectors vary in their ability to see visual defects; therefore, this is neither a discrete outcome nor an easy-to-train instruction.

4.2.6 ATMV Steps:

4.2.6.1 Step 1 – Prepare the test method documentation: (1) Make sure equipment qualifications have been completed, or are at least in the validation plan to be completed, prior to executing the ATMV. (2) Examples of equipment settings to be captured in the test method documentation include environmental or ambient conditions, magnification level on microscopes, lighting and feed rate on automatic inspection systems, pressure on a vacuum decay test, and lighting standards in a cleanroom, which might involve taking lux readings in the room to characterize the light level. (3) Work with training personnel to create pictures of the defects. It may be beneficial to also include pictures of good product and of less extreme examples of the defect, since a spectrum of examples provides better resolution for decision making. (4) Where possible, the visual design standards should be shown at the same magnification level as will be used during inspection. (5) Make sure that the ATMV is run using the most recent visual design standards and that they are good representations of the potential defects.
4.2.6.2 Step 2 – Establish acceptance criteria: (1) Identify which defects need to be included in the test. (2) Use scrap history to identify the frequency of each defect code or type; this information could also simply be provided by the SME. (3) Do not try to squeeze too many defects into a single inspection step. As more defects are added to an inspection process, inspectors will eventually reach a point where they are unable to check for everything, and this threshold may also show itself in the ATMV testing. Limits will vary by type of product and test method, but for visual inspection, 15–20 defects may be the maximum attainable number.

4.2.6.3 Step 3 – Determine the required performance level of each defect: (1) If the ATMV testing precedes completion of a risk analysis, the suggested approach is to use a worst-case outcome or high-risk designation. This needs to be weighed against the increase in sample size associated with the more conservative rating. (2) Failure modes that do not have an associated risk index may be tested to whatever requirements are agreed upon by the validation team. If a component or assembly can be scrapped for a particular failure mode, good business sense is to make sure that the inspection is effective by conducting an ATMV. (3) Pin gages are an example of a variable output that is sometimes treated as attribute data due to poor resolution combined with tight specification limits. In this application, inspectors are trained prior to the testing to understand the level of friction that is acceptable versus unacceptable. (4) Incoming inspection is another example where variable data is often treated as attribute data. Treating variable measurements as pass/fail outcomes can allow for less complex measurement tools, such as templates, and require less training for inspectors. However, these benefits should be weighed against the additional samples that may be required and the degree of information lost. For instance, attribute data would say that samples centered between the specification limits are no different from samples just inside the specification limits. This could result in greater downstream costs and more difficult troubleshooting for yield improvements.

4.2.6.4 Step 4 – Determine acceptance criteria: (1) Refer to your company’s predefined confidence and reliability requirements; or (2) refer to the chart example in Appendix X1 (a sketch of deriving a plan size from such a requirement follows below).
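One common way to turn a confidence/reliability requirement into a sampling plan is a binomial demonstration test: choose the smallest n such that passing with at most a failures demonstrates the required reliability at the required confidence. A minimal sketch (assuming scipy; the 95 %/95 % inputs are placeholders, not requirements from this guide):

```python
# Illustrative sketch: smallest n such that an (n, a) plan demonstrates the
# stated reliability at the stated confidence. Assumes scipy; 95%/95% is a
# placeholder requirement, not a value from the standard.
from math import ceil, log
from scipy.stats import binom

def plan_size(confidence: float, reliability: float, a: int = 0) -> int:
    """Smallest n where passing with at most `a` failures is sufficiently
    unlikely if the true defect rate exceeds (1 - reliability)."""
    p_defect = 1.0 - reliability
    n = a + 1
    while binom.cdf(a, n, p_defect) > 1.0 - confidence:
        n += 1
    return n

print(plan_size(0.95, 0.95))               # 59 samples, zero failures allowed
print(ceil(log(1 - 0.95) / log(0.95)))     # zero-failure closed form, also 59
```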
4.2.6.5 Step 5 – Create the validation plan: (1) Determine the proportion of each defect in the sample. (a) While some rationale should be provided for how the defect proportions are distributed in the ATMV, there is some flexibility in choosing the proportions. Therefore, different strategies may be employed for different products and processes, for example 10 defective parts in 30, or 20 in 30. The cost of the samples, along with the risk associated with incorrect outcomes, affects the decision. (b) Scrap production data will often not be available for new products. In these instances, use historical scrap from a similar product, or estimate the expected scrap proportions based on process challenges that were observed during development. Another option is to represent all of the defects evenly.

4.2.6.6 Step 6 – Determine the number of inspectors and devices needed: (1) When the number of trials is large, consider employing more than three inspectors to reduce the number of unique parts required for the test. More inspectors can inspect the same parts, achieving additional trials and greater statistical power without adding more parts. (2) Inspectors are not required to all look at the same samples, although this is probably the simplest approach. (3) For semi-automated inspection systems that are sensitive to fixture placement or setup by the inspector, multiple inspectors should still be employed for the test. (4) For automated inspection systems that are completely inspector-independent, only one inspector is needed. However, in order to reduce the number of unique parts needed, consider controlling other sources of variation, such as lighting conditions, temperature, humidity, inspection time, day/night shift, and part orientation.

4.2.6.7 Step 7 – Prepare the inspectors: (1) Train the inspectors prior to testing: (a) Explain the purpose and importance of the ATMV to the inspectors. (b) Inspector training should be a two-way process. The validation team should seek feedback from the inspectors on the quality and clarity of the visual standards, pictures, and written descriptions in the inspection documentation: Are there any gray areas that need clarification? Would a diagram be more effective than an actual picture of the defect? (c) Review borderline samples, and consider adding pictures or diagrams of borderline samples to the visual standards. In some cases there may be a difference between functional and cosmetic defects; this may vary by method or package type. (d) Some validation teams have performed dry-run testing to characterize the current effectiveness of the inspection. Note that the same samples should not be used for dry-run testing and final testing if the same inspectors are involved in both tests.

4.2.6.8 Step 8 – Select a representative group of inspectors as the test group: (1) There will be situations, such as a site transfer, where all of the inspectors have about the same level of familiarity with the product. If this is the case, select the test group based on other sources of variability among the inspectors, such as production shift, skill level, or years of experience with similar product inspection. (2) The inspectors selected for testing should at least have familiarity with the product; otherwise the test becomes overly conservative. For example, a lack of experience with the product may result in an increase in false positives. (3) Document that a varied group of inspectors was selected for testing.
4.2.6.9 Step 9 – Prepare the test samples: (1) Collect representative units. (a) Be prepared for ATMV testing by collecting representative defect devices early and often in the development process. Borderline samples are particularly valuable to collect at this time. However, be aware that a sample whose good/bad status cannot even be agreed upon by the subject matter experts will only cause problems in the testing; instead, choose samples that are representative of “just passing” and “just failing” relative to the acceptance criteria. (2) Use best judgment as to whether man-made defect samples adequately represent defects that naturally occur during the sealing process, distribution simulation, or other manufacturing processes, for example. If a defect cannot be adequately replicated and/or its occurrence rate is too low to provide a sample for the testing, the defect type may be omitted from the testing with a documented rationale. (3) Estimate from a master plan how many defects will be necessary for testing, and try to obtain 1.5 times the estimated number of samples required; this allows for weeding out broken and less desirable samples. (4) Traceability of samples may not be necessary; the only requirement on samples is that they accurately depict conformance or the intended nonconformance. However, capturing traceability information may be helpful for investigational purposes if there is difficulty validating the method or if it is desirable to track outputs to specific non-conformities. (5) Preferably, more than one SME should confirm the status of each sample in the test. Keep in mind that a trainer or production supervisor might also be an SME on the process defect types. (6) Select a storage method appropriate for the particular sample. Potential options include tackle boxes with separate labeled compartments, plastic resealable bags, and plastic vials. Refer to your standardized test method for pre-conditioning requirements. (7) Writing a secret code number on each part successfully conceals the type of defect, but it is NOT an effective means of concealing the identity of the part. In other words, if an inspector is able to remember the identification number of a sample and the defect they detected on that sample, then the test has been compromised the second time the inspector is given that sample. If each sample is viewed only once by each inspector, then placing the code number on the sample is not an issue. (8) Video testing is another option for some manual visual inspections, especially if the defect has the potential to change over time, such as a crack or foreign material. (9) If the product is extremely long or large, such as a guidewire, guide catheter, pouch, tray, or container closure system (jar and lid), and the defects of interest are only in a particular segment of the product, one can choose to detach the pertinent segment from the rest of the sample. If extenuating factors such as length or delicacy are part of what makes the full product challenging to inspect, then the full product should be used. Example: a leak test where liquid in the package could impact the test result. (10) Take pictures or videos of samples with defects and store them in a key for future reference.

4.2.6.10 Step 10 – Develop the protocol: (1) Suggested protocol sections: (a) Purpose and scope. (b) Reference to the test method document being validated. (c) A list of references to other related documents, if applicable. (d) A list of the types of equipment, instruments, fixtures, etc. used for the TMV. (e) TMV study rationale, including: (1) the statistical method used for the TMV; (2) the characteristics measured by the test method and the measurement range covered by the TMV; (3) a description of the test samples and the rationale; (4) the number of samples, number of operators, and number of trials; (5) the data analysis method, including any historical statistics that will be used for the data analysis (for example, the historical average for calculating %P/T with a one-sided specification limit). (f) TMV acceptance criteria. (g) Validation test procedures (for example, sample preparation, test environment setup, test order, data collection method, etc.). (h) Methods of randomization: (1) There are multiple ways to randomize the order of the samples. In all cases, store the randomized order in another column, then repeat and append the second randomized list to the first for each sample that is being inspected a second time by the same inspector. (2) Consider using Excel, Minitab, or an online random number generator to create the run order for the test (a scripted equivalent is sketched after this list). (3) Alternatively, draw numbers from a container until the container is empty and record the order. (i) Some companies apply time limits to each sample, or a total time limit for the test, so that the testing is more representative of the fast-paced production environment. If used, this should be noted in the protocol.
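A scripted equivalent of the run-order generation in (h), assuming Python with numpy and pandas (the sample IDs and seed are illustrative):

```python
# Illustrative sketch of Step 10(h): build a randomized run order, then append
# an independently randomized second pass for repeat viewings by an inspector.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=20240228)     # fixed seed keeps the order reproducible/auditable
samples = [f"S{i:02d}" for i in range(1, 31)]  # 30 coded samples

first_pass = rng.permutation(samples)
second_pass = rng.permutation(samples)         # independent re-randomization for the repeat pass

run_order = pd.DataFrame({
    "trial": range(1, 2 * len(samples) + 1),
    "sample_id": np.concatenate([first_pass, second_pass]),
})
print(run_order.head())
```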
4.2.6.11 Step 11 – Execute the protocol: (1) Be sure to comply with the pre-conditioning requirements during protocol execution. (2) Avoid situations where the inspector is hurrying to complete the testing. Estimate how long each inspector will take, and plan to start each test with enough time left in the shift for the inspector to complete their section, or communicate that the inspector will be allowed to go to lunch or break during the test. (3) Explain to the inspector which inspection steps are being tested, and clarify whether there may be more than one defect per sample. Note, however, that more than one defect on a sample can create confusion during the testing. (4) If the first person fails to correctly identify the presence or absence of a defect, it is a business/team decision whether to continue the protocol with the remaining inspectors. Completing the protocol will help characterize whether the issues are widespread, which could help avoid failing again the next time; on the other hand, aborting the ATMV right away could save considerable time for everyone. (5) It is not good practice to change the sampling plan during the test if a failure occurs. For instance, if the original beta error sampling plan was n = 45, a = 0, and a failure occurs, updating the sampling plan to n = 76, a = 1 during the test is incorrect, since the sampling plan actually being performed is then a double sampling plan with n1 = 45, a1 = 0, r1 = 2, n2 = 31, a2 = 1. This results in an LTPD of 5.8 %, rather than the 5.0 % LTPD of the original plan (see the sketch after this list). (6) Be prepared with replacement samples in reserve in case a defect sample becomes damaged. (7) Running the test concurrently with all of the test inspectors is risky, since the administrator is responsible for keeping track of which inspector has each unlabeled sample. (8) Review misclassified samples after each inspector to determine whether the inspector might have detected a defect that the prep team missed.
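The LTPD figures in item (5) can be checked numerically. A minimal sketch (assuming scipy), treating LTPD as the defect rate at which the probability of acceptance is 10 %:

```python
# Illustrative sketch verifying the Step 11(5) example: LTPD of the original
# single sampling plan versus the unintended double sampling plan.
from scipy.stats import binom
from scipy.optimize import brentq

def pa_single(p, n=76, a=1):
    # probability of acceptance: at most `a` failures in n trials
    return binom.cdf(a, n, p)

def pa_double(p, n1=45, n2=31):
    # accept on 0 failures in the first 45; on exactly 1, accept only
    # if there are 0 further failures in the additional 31 trials
    return binom.pmf(0, n1, p) + binom.pmf(1, n1, p) * binom.pmf(0, n2, p)

ltpd_single = brentq(lambda p: pa_single(p) - 0.10, 1e-6, 0.5)
ltpd_double = brentq(lambda p: pa_double(p) - 0.10, 1e-6, 0.5)
print(f"single plan n=76, a=1: LTPD ~ {ltpd_single:.1%}")   # ~5.0%
print(f"double plan 45 + 31:   LTPD ~ {ltpd_double:.1%}")   # ~5.8%
```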
4.2.6.12 Step 12 – Analyze the test results: (1) Scrapping for the wrong defect code or defect type: (a) There will be instances where an inspector describes a defect with a word that was not included in the protocol. The validation team needs to determine whether the word used is synonymous with any of the listed names for this particular defect. If not, the trial fails; if the word matches the defect, note the exception in the deviations section of the report. (2) Excluding data from calculations of performance: (a) If a defect is discovered after the test is complete, there are two suggested options: the inspector may be tested on a replacement part later if necessary, or, if the results of the individual trial will not alter the final result of the sampling plan, the replacement trials can be bypassed. This rationale should be documented in the deviations section of the report. (1) As an example, consider an alpha sampling plan of n = 160, a = 13 that is designed to meet a 12 % alpha error rate. After all inspectors had completed the test, it was determined that one of the conforming samples had a defect; five of the six trials on this sample identified the defect, while one of the six called it a conforming sample. The results of the six trials need to be scratched, but do they need to be repeated? If the remaining 154 conforming trials have few enough failures to still meet the required alpha error rate of 12 %, then no replacement trials are necessary (a sketch of this recheck follows below). The same rationale would also apply to a defective sample in a beta error sampling plan. (2) If a vacuum decay test sample should have failed the leak test, the protocol may provide for sending the sample back to the company that created the defective sample, for confirmation that it is indeed still defective. If it is found to no longer be representative of the desired defect type, the sample is excluded from the calculations.
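One way to read that recheck, under the assumption that the plan is a binomial demonstration that the alpha error rate is at most 12 % at some stated confidence (an assumption for illustration; the standard's own derivation of the n = 160, a = 13 plan is not reproduced in this abstract):

```python
# Illustrative sketch of the Step 12 recheck: after excluding one sample's six
# trials, what acceptance number keeps the confidence of the original plan?
# Assumes scipy and a binomial-demonstration reading of the plan.
from scipy.stats import binom

alpha_max = 0.12
n_orig, a_orig = 160, 13
confidence = 1 - binom.cdf(a_orig, n_orig, alpha_max)  # confidence achieved by the original plan

n_new = 154  # six trials on the disqualified sample removed
# largest acceptance number still demonstrating alpha <= 12% at that confidence
a_new = max(a for a in range(n_new + 1)
            if 1 - binom.cdf(a, n_new, alpha_max) >= confidence)
print(f"original plan confidence: {confidence:.1%}")
print(f"with n={n_new}, allow at most {a_new} failures")
```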
4.2.6.13 Step 13 – Complete the validation report: (1) When the validation test passes: (a) If the ATMV was difficult to pass, or it requires special inspector training, consider adding an appraiser proficiency test to limit who is eligible to perform the process inspection. (2) When the validation test fails: (a) Repeating the validation: (1) There is no restriction on how many times an ATMV can fail. However, some common sense should be applied, as a high number of attempts appears to be a test-until-you-pass approach and could become an audit flag. A good rule of thumb, therefore, is to perform a dry run or feasibility assessment prior to execution to optimize appraiser training and test methodology and reduce the risk of failing the protocol. If an ATMV fails, members of the validation team could take the test themselves: if the validation team passes, then something is not being communicated clearly to the inspectors, and additional interviews are needed to identify the confusion; if the validation team also fails, this is a strong indication that the visual inspection or attribute test method is not ready for release. (b) User error: (1) Examples of ATMV test error include: (a) a microscope set at the wrong magnification; (b) sample traceability compromised during the ATMV due to a sample mix-up. (2) A test failure demonstrates that the variability among inspectors needs to be reduced. The key is to understand why the test failed, correct the issue, and document the rationale, so that subsequent tests do not appear to be a test-until-you-pass approach. (3) As much as possible, the same samples should not be used for a subsequent ATMV if the same inspectors are being tested as in the previous ATMV. (4) Interview any inspectors who committed classification errors to understand whether their errors were due to a misunderstanding of the acceptance criteria or simply a miss. (5) To improve the proficiency of defect detection and the test methodology, the following are some suggested best practices: (a) Define an order of inspection in the work instruction, such as moving from the proximal end to the distal end, or inspecting the inside and then the outside. (b) When inspecting multiple locations on a component or assembly for specific attributes, provide a visual template with ordered numbers to follow during the inspection. (c) Transfer the microscope picture to a video screen for easier viewing. (d) If there are too many defect types to look for at one inspection step, some may get missed; move any inspections not associated with the process back upstream to the process that would have created the defect. (6) When an inspector has misunderstood the criteria, the need is to better differentiate good and nonconforming product. Some ideas: (a) Review the visual standard of the defect with the inspectors and trainers. (b) Determine whether a diagram might be more informative than a photo. (c) Change the magnification level on the microscope. (d) If an ATMV is failing because borderline defects are being wrongly accepted, slide the manufacturing acceptance criteria threshold to a more conservative level. This will potentially increase the alpha error rate, which typically has a higher error-rate allowance anyway, but the beta error rate should decrease. (7) Consider using an attribute agreement analysis to help identify the root cause of the ATMV failure, as it is a good tool for assessing the agreement of nominal or ordinal ratings given by multiple appraisers. The analysis calculates both the repeatability of each individual appraiser and the reproducibility between appraisers, similar to a variable gage R&R (a minimal sketch follows below).
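A minimal sketch of such an agreement analysis, assuming Python with statsmodels and using Fleiss' kappa for between-appraiser agreement (the ratings matrix is illustrative):

```python
# Illustrative sketch of an attribute agreement analysis (Step 13(7)).
# Rows = samples, columns = appraisers; 1 = accept, 0 = reject.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [1, 1, 1],   # all three appraisers accept
    [0, 0, 0],   # all three reject
    [1, 0, 1],   # disagreement on a borderline sample
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
])

table, _ = aggregate_raters(ratings)   # per-sample counts in each rating category
print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")  # 1.0 would be perfect agreement
```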
4.2.6.14 Step 14 – Post-Validation Activities: (1) Test method changes: (a) If requirements, standards, or test methods change, the impact on the other two factors needs to be assessed. (b) As an example, many attribute test methods, such as visual inspection, have no impact on the form, fit, or function of the device being tested. It is therefore easy to overlook that changes to the test method criteria documented in design prints, visual design standards, and visual process standards need to be closely evaluated for the impact the change might have on the performance of the device. (c) A good practice is to bring together representatives from operations and design to review the proposed change and consider its potential outcomes. (d) For example, changes to the initial visual inspection standards that were used during design verification builds may result in defects not being identified prior to going through distribution simulation. Stresses that were missed during this initial inspection may be exacerbated by the shock, vibration, and thermal cycling associated with the distribution simulation process. Thus, it is important to understand the impact that changes to the visual standards used upstream may have on downstream inspections. (2) Augmented Test Method Validation—Sometimes a new defect is identified after the ATMV has already been validated. There are a variety of ways to validate detection of the new failure mode. (a) Option #1 – Repeat the entire validation with the addition of the new criterion. (1) Advantages: the end result is a complete, stand-alone validation that fully represents the final inspection configuration. (2) Disadvantages: this is an excessive level of work that amounts to revalidation of the already-validated criteria.

Basic Information

  • Standard number:

    ASTM F3263-17

  • Standard title:

    Standard Guide for Packaging Test Method Validation

  • English title:

    Standard Guide for Packaging Test Method Validation
  • Status:

    Active
  • Publication date:

  • Effective date:

  • Language:

Classification

  • ICS code:

  • CCS (China Standard) code:

Related Standards

  • Replaces:

  • Replaced by:

  • Referenced standards:

  • Adopted standards:

Publication Information

  • Pages:

    0
  • Word count:

  • Format:

Other Information

  • Drafters:

  • Drafting organization:

  • Technical committee:

  • Proposed by:

  • Issued by: