Guide me through building a DnCNN model for noise reduction in images. You can consult me for specific requirements, particularly regarding changes to the deep-learning model.


M.SC CAPSTONE PROJECT – FINAL REPORT – 20XX

Student's Name, supervised by Supervisor's Name

Title of Your MSc Capstone Group Project

Abstract— The final report is the most important output of your MSc project. The submitted report must conform to this template and should NOT be longer than 10 A4-sized pages (including diagrams and references). Note that there is a penalty for over-length reports: reports over the 10-page limit will be subject to a 10% reduction in the report component for each page over the limit. As a recommendation, you can aim for the range of 4000–6000 words with 5-10 diagrams, but this could be different depending on the nature of your project.

Index Terms— Write about four keywords or important phrases related to your project in alphabetical order, separated by commas.

Introduction

This section contains the motivation of the research as well as the literature review. It clarifies the scope and the motivation of the work. It starts with a general overview of your topic, and it should clearly discuss the importance of your project topic and why it matters.

A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your project.

A good literature review does not just summarize sources: it analyses, synthesises, and critically evaluates them to give a clear picture of the state of knowledge on the subject. You should not just paraphrase other researchers; add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole. It is important to write well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons, and contrasts.

Sub-section Titles

If necessary, a subsection can be used to create a hierarchy of information.

Equations

Number equations consecutively with equation numbers in parentheses flush with the right margin, as in (1). First use the equation editor to create the equation. Then select the “Equation” markup style. Press the tab key and write the equation number in parentheses. To make your equations more compact, you may use the solidus ( / ), the exp function, or appropriate exponents. Use parentheses to avoid ambiguities in denominators. Punctuate equations when they are part of a sentence, as in

(1)

Be sure that the symbols in your equation have been defined before the equation appears or immediately following. Italicize symbols (an italic T might refer to temperature, but a roman T is the unit tesla). Refer to “(1),” not “Eq. (1)” or “equation (1),” except at the beginning of a sentence: “Equation (1) is … .”
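For illustration, the compact style described above might render an invented equation (not the template's (1)) like this:

% An invented example, not from the template: solidus for the fraction,
% \exp rather than e^{...}, and parentheses keeping the denominator unambiguous.
\begin{equation}
  n(E) = \exp(-E/kT) / (kT\,Z)
\end{equation}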

Units

Use SI (MKS) units. SI units are strongly encouraged and you should have an astoundingly good reason not to use them.

Figures and Tables

All figures and tables should be referred to in the text as Figure 1 and Table 1. They should be one column width wide. The general rule is to place them as near as possible after the first reference to them and if possible to the top or the bottom of a column. In special cases, you can include a large diagram that spans over the two columns if it is needed.

Figure axis labels are often a source of confusion. Use words rather than symbols. As an example, write the quantity “Magnetization,” or “Magnetization M,” not just “M.” Put units in parentheses. Do not label axes only with units. As in Fig. 1, for example, write “Magnetization (A/m)” or “Magnetization (A·m−1),” not just “A/m.” Do not label axes with a ratio of quantities and units. For example, write “Temperature (K),” not “Temperature/K.”

Table sample

+------------+----------------------+---------+---------+
| Table Head |           Table Column Head              |
|            +----------------------+---------+---------+
|            | Table column subhead | Subhead | Subhead |
+------------+----------------------+---------+---------+
| copy       | More table copy      |         |         |
+------------+----------------------+---------+---------+

[Fig. 1: chart/scatter plot omitted]

Fig. 1.  Magnetization as a function of applied field. There is a period after the figure number, followed by two spaces. It is good practice to explain the significance of the figure in the caption.

Multipliers can be especially confusing. Write “Magnetization (kA/m)” or “Magnetization (10³ A/m).” Do not write “Magnetization (A/m) × 1000” because the reader would not know whether the top axis label in Fig. 1 meant 16000 A/m or 0.016 A/m. Figure labels should be legible, approximately 8 to 12 point type.
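A minimal matplotlib sketch (invented data; matplotlib is an assumption, not part of the template) that follows these labeling rules:

import numpy as np
import matplotlib.pyplot as plt

h = np.linspace(0, 20, 200)            # applied field, in kA/m
m = 16 * (1 - np.exp(-h / 5))          # toy magnetization curve, in kA/m

plt.plot(h, m)
plt.xlabel("Applied field (kA/m)")     # quantity name + unit, multiplier folded into the unit
plt.ylabel("Magnetization (kA/m)")     # not just "M", and not "A/m x 1000"
plt.show()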

References

Number citations consecutively in square brackets [1]. The sentence punctuation follows the brackets [2]. Multiple references [2], [3] are each numbered with separate brackets [1]–[3]. When citing a section in a book, please give the relevant page numbers [2]. In sentences, refer simply to the reference number, as in [3]. Do not use “Ref. [3]” or “reference [3]” except at the beginning of a sentence: “Reference [3] shows … .” Please do not use automatic endnotes in Word; rather, type the reference list at the end of the paper using the “References” style.

Please note that the references at the end of this document are in the preferred referencing style. Give all authors’ names; do not use “et al.” unless there are six authors or more. Use a space after authors’ initials. Papers that have not been published should be cited as “unpublished” [4]. Papers that have been accepted for publication, but not yet specified for an issue, should be cited as “to be published” [5]. Papers that have been submitted for publication should be cited as “submitted for publication” [6]. Please give affiliations and addresses for private communications [7].

Capitalize only the first word in a paper title, except for proper nouns and element symbols. For papers published in translation journals, please give the English citation first, followed by the original foreign-language citation [8].

Abbreviations and Acronyms

Define abbreviations and acronyms the first time they are used in the text, even after they have already been defined in the abstract. Abbreviations such as IEEE, SI, ac, and dc do not have to be defined. Abbreviations that incorporate periods should not have spaces: write “C.N.R.S.,” not “C. N. R. S.” Do not use abbreviations in the title unless they are unavoidable (for example, “IEEE” in the title of this article).

Other Recommendations

Use one space after periods and colons. Hyphenate complex modifiers: “zero-field-cooled magnetization.” Avoid dangling participles, such as “Using (1), the potential was calculated.” [It is not clear who or what used (1).] Write instead, “The potential was calculated by using (1),” or “Using (1), we calculated the potential.”

Use a zero before decimal points: “0.25,” not “.25.” Use “cm³,” not “cc.” Indicate sample dimensions as “0.1 cm × 0.2 cm,” not “0.1 × 0.2 cm².” The abbreviation for “seconds” is “s,” not “sec.” Do not mix complete spellings and abbreviations of units: use “Wb/m²” or “webers per square meter,” not “webers/m².” When expressing a range of values, write “7 to 9” or “7-9,” not “7~9.”

A parenthetical statement at the end of a sentence is punctuated outside of the closing parenthesis (like this). (A parenthetical sentence is punctuated within the parentheses.) Avoid contractions; for example, write “do not” instead of “don’t.” The serial comma is preferred: “A, B, and C” instead of “A, B and C.”

An excellent style manual and source of information for science writers is [9]. A general IEEE style guide and an Information for Authors are both available at http://www.ieee.org/web/publications/authors/transjnl/index.html

Methodology

This section should outline the methods used in your investigation. Examples include experimental methods, methods of calculation or mathematical techniques, descriptions of software algorithms developed, and how hardware was configured. If there are several smaller investigations, then the aims should be clearly stated, and you may wish to devote a combined method, results, and discussion section of the paper to each.

Results and Discussion

This is an important section of the report, as this is where you present, describe, and analyse your results. When writing this section, ask yourself: Do my findings address the aim of the paper? How do my conclusions compare with other research in the peer-reviewed literature? Are my arguments logical? Do I have enough results to make claims, or is there only a slight suggestion in the data? Do I have conflicting results? Why? What other information would I need to support my argument?

Conclusion

A conclusion section is required. Although a conclusion may review the main points of the paper, do not replicate the abstract as the conclusion. A conclusion might elaborate on the importance of the work or suggest applications and extensions.

Acknowledgment

The authors would like to thank … This is optional.

References

[1] G. O. Young, “Synthetic structure of industrial plastics (Book style with paper title and editor),” in Plastics, 2nd ed. vol. 3, J. Peters, Ed. New York: McGraw-Hill, 1964, pp. 15–64.

[2] W.-K. Chen, Linear Networks and Systems (Book style). Belmont, CA: Wadsworth, 1993, pp. 123–135.

[3] H. Poor, An Introduction to Signal Detection and Estimation. New York: Springer-Verlag, 1985, ch. 4.

[4] B. Smith, “An approach to graphs of linear forms (Unpublished work style),” unpublished.

[5] E. H. Miller, “A note on reflector arrays (Periodical style—Accepted for publication),” IEEE Trans. Antennas Propagat., to be published.

[6] J. Wang, “Fundamentals of erbium-doped fiber amplifiers arrays (Periodical style—Submitted for publication),” IEEE J. Quantum Electron., submitted for publication.

[7] C. J. Kaufman, Rocky Mountain Research Lab., Boulder, CO, private communication, May 1995.

[8] Y. Yorozu, M. Hirano, K. Oka, and Y. Tagawa, “Electron spectroscopy studies on magneto-optical media and plastic substrate interfaces (Translation Journals style),” IEEE Transl. J. Magn. Jpn., vol. 2, pp. 740–741, Aug. 1987 [Dig. 9th Annu. Conf. Magnetics Japan, 1982, p. 301].

[9] M. Young, The Technical Writer’s Handbook. Mill Valley, CA: University Science, 1989.


__pycache__/Dncnn.cpython-38.pyc

cut.py

import os
import shutil
import cv2
import SimpleITK as sitk
from PIL import Image
from tqdm import tqdm

# import numpy as np
#
#
# def slice_and_save_images(input_dir, output_dir, slice_size=64):
#     """
#     Read mhd+raw medical images, cut them into cubes, and save the cubes as image files.
#
#     Args:
#         input_dir (str): directory of the input images
#         output_dir (str): directory for the output slice images
#         slice_size (int): slice size (default 64x64)
#     """
#     # create the output directory
#     if not os.path.exists(output_dir):
#         os.makedirs(output_dir)
#
#     # iterate over all files in the input directory
#     for filename in os.listdir(input_dir):
#         if filename.endswith('.mhd'):
#             # read the mhd+raw image
#             image_path = os.path.join(input_dir, filename)
#             image = sitk.ReadImage(image_path)
#             image_array = sitk.GetArrayFromImage(image)
#
#             # get the image dimensions
#             depth, height, width = image_array.shape
#
#             # cut the image into cubes and save them as image files
#             for z in range(0, depth, slice_size):
#                 for y in range(0, height, slice_size):
#                     for x in range(0, width, slice_size):
#                         slice_image = image_array[z:z + slice_size, y:y + slice_size, x:x + slice_size]
#
#                         # normalize the slice to 8-bit integers
#                         normalized_slice = (slice_image - slice_image.min()) / (
#                                 slice_image.max() - slice_image.min()) * 255
#                         normalized_slice = normalized_slice.astype(np.uint8)
#
#                         slice_path = os.path.join(output_dir, f"slice_{z}_{y}_{x}.jpg")
#                         sitk.WriteImage(sitk.GetImageFromArray(slice_image), slice_path)
#                         print(f"Saved slice: {slice_path}")


# example usage
input_dir = r'C:\Users\Administrator\Desktop\data\TEST001'
output_dir = r'./'
# slice_and_save_images(input_dir, output_dir, slice_size=64)


def slice(ori_path: str, pro_path: str):
    """Export every B-scan of each mhd+raw volume under ori_path as an 8-bit PNG."""
    id = 0
    for path in os.listdir(ori_path):
        if path.find('mhd') >= 0:
            id += 1
            save_content = os.path.join(pro_path, str(id))
            if os.path.exists(save_content):
                shutil.rmtree(save_content)
            os.makedirs(save_content)
            data_mhd = sitk.ReadImage(os.path.join(ori_path, path))
            spacing = data_mhd.GetSpacing()
            scan = sitk.GetArrayFromImage(data_mhd)
            for i in tqdm(range(len(scan))):
                # min-max normalize each slice to the 8-bit range before saving
                img = cv2.normalize(scan[i], None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
                img = Image.fromarray(img)
                save_path = os.path.join(save_content, f'{id}_{i}.png')
                img.save(save_path)


slice(input_dir, output_dir)

Dncnn.py

import torch
import torch.nn as nn
import torch.nn.functional as F


class DnCNN(nn.Module):
    def __init__(self, depth=17, n_channels=64, image_channels=1):
        super(DnCNN, self).__init__()
        kernel_size = 3
        padding = 1
        features = n_channels
        layers = []
        # first layer: conv + ReLU
        layers.append(nn.Conv2d(in_channels=image_channels, out_channels=features,
                                kernel_size=kernel_size, padding=padding, bias=True))
        layers.append(nn.ReLU(inplace=True))
        # middle layers: conv + batch norm + ReLU
        for _ in range(depth - 2):
            layers.append(nn.Conv2d(in_channels=features, out_channels=features,
                                    kernel_size=kernel_size, padding=padding, bias=False))
            layers.append(nn.BatchNorm2d(features))
            layers.append(nn.ReLU(inplace=True))
        # last layer: conv back to the image channel count
        layers.append(nn.Conv2d(in_channels=features, out_channels=image_channels,
                                kernel_size=kernel_size, padding=padding, bias=False))
        self.dncnn = nn.Sequential(*layers)

    def forward(self, x):
        out = self.dncnn(x)
        return out


# Example usage
if __name__ == "__main__":
    # Create a DnCNN model
    model = DnCNN(depth=17, n_channels=64, image_channels=1)

    # Load pre-trained weights (if available)
    # model.load_state_dict(torch.load('path/to/pretrained_model.pth'))

    # Move the model to the GPU (if available)
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model.to(device)

    # Prepare the input image
    input_image = torch.randn(1, 1, 256, 256).to(device)  # Adjust the size as needed

    # Forward pass
    output = model(input_image)

    # Print the output shape
    print(output.shape)
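The repo only defines the network above; a minimal sketch of how it could be trained on the clean/noisy slice pairs in the `list` file follows. This is an assumption, not code from this repo: `PairListDataset` and `train_dncnn` are hypothetical helpers, and it uses the residual-learning target from the DnCNN paper (the network predicts the noise, which is subtracted from the input at inference time).

import cv2
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader

from Dncnn import DnCNN


class PairListDataset(Dataset):
    """Reads 'clean_path noisy_path - - -' lines as written by generate_list.py."""

    def __init__(self, list_path):
        self.pairs = []
        for line in open(list_path):
            parts = line.split()
            if len(parts) >= 2:
                self.pairs.append((parts[0], parts[1]))

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        clean_path, noisy_path = self.pairs[idx]
        clean = cv2.imread(clean_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
        noisy = cv2.imread(noisy_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
        return torch.from_numpy(noisy)[None], torch.from_numpy(clean)[None]


def train_dncnn(list_path, epochs=50, batch_size=16, lr=1e-3):
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = DnCNN(depth=17, n_channels=64, image_channels=1).to(device)
    loader = DataLoader(PairListDataset(list_path), batch_size=batch_size, shuffle=True)
    criterion = nn.MSELoss()
    optimizer = optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for noisy, clean in loader:
            noisy, clean = noisy.to(device), clean.to(device)
            optimizer.zero_grad()
            residual = model(noisy)                    # DnCNN predicts the noise
            loss = criterion(residual, noisy - clean)  # residual-learning target
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
    return model

# at inference time the denoised image is the input minus the predicted noise:
#   denoised = noisy - model(noisy)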

generate_list.py

import os

input_path = r"F:\learn\EDCNN-master\EDCNN-master\demo1/"
gt_path = r"F:\learn\EDCNN-master\EDCNN-master\demo2/"
list_path = r"F:\learn\PMRID-Pytorch-main\PMRID-Pytorch-main/list"

f = open(list_path, 'w')
for root, tmp, files in os.walk(input_path, topdown=False):
    for name in files:
        name1 = 'noisy_' + name
        if os.path.exists(gt_path + name1):
            input_file = input_path + name
            gt_file = gt_path + name1
            print(input_file + " " + gt_file)
            # one line per pair: clean path, noisy path, then a "- - -" separator
            f.writelines(input_file + " " + gt_file + " - - -" + "\n")
f.close()

list

F:\learn\EDCNN-master\EDCNN-master\demo1/1_0.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_0.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_1.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_1.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_10.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_10.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_100.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_100.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_101.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_101.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_102.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_102.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_103.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_103.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_104.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_104.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_105.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_105.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_106.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_106.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_107.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_107.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_108.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_108.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_109.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_109.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_11.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_11.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_110.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_110.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_111.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_111.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_112.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_112.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_113.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_113.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_114.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_114.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_115.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_115.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_116.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_116.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_117.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_117.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_118.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_118.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_119.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_119.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_12.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_12.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_120.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_120.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_121.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_121.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_122.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_122.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_123.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_123.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_124.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_124.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_125.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_125.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_126.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_126.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_127.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_127.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_13.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_13.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_14.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_14.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_15.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_15.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_16.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_16.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_17.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_17.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_18.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_18.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_19.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_19.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_2.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_2.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_20.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_20.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_21.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_21.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_22.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_22.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_23.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_23.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_24.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_24.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_25.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_25.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_26.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_26.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_27.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_27.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_28.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_28.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_29.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_29.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_3.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_3.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_30.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_30.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_31.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_31.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_32.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_32.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_33.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_33.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_34.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_34.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_35.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_35.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_36.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_36.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_37.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_37.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_38.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_38.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_39.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_39.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_4.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_4.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_40.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_40.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_41.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_41.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_42.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_42.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_43.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_43.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_44.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_44.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_45.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_45.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_46.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_46.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_47.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_47.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_48.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_48.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_49.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_49.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_5.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_5.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_50.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_50.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_51.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_51.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_52.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_52.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_53.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_53.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_54.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_54.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_55.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_55.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_56.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_56.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_57.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_57.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_58.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_58.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_59.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_59.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_6.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_6.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_60.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_60.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_61.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_61.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_62.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_62.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_63.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_63.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_64.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_64.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_65.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_65.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_66.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_66.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_67.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_67.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_68.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_68.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_69.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_69.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_7.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_7.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_70.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_70.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_71.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_71.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_72.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_72.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_73.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_73.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_74.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_74.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_75.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_75.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_76.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_76.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_77.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_77.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_78.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_78.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_79.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_79.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_8.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_8.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_80.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_80.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_81.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_81.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_82.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_82.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_83.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_83.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_84.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_84.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_85.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_85.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_86.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_86.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_87.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_87.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_88.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_88.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_89.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_89.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_9.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_9.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_90.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_90.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_91.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_91.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_92.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_92.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_93.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_93.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_94.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_94.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_95.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_95.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_96.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_96.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_97.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_97.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_98.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_98.png - - -
F:\learn\EDCNN-master\EDCNN-master\demo1/1_99.png F:\learn\EDCNN-master\EDCNN-master\demo2/noisy_1_99.png - - -

logs/pmrid_l2/events.out.tfevents.1721904029.Administrator.32136.0

logs/pmrid_l2/events.out.tfevents.1721910184.Administrator.25772.0

logs/pmrid_l2/events.out.tfevents.1721910210.Administrator.35008.0

logs/pmrid_l2/events.out.tfevents.1721910291.Administrator.31576.0

logs/pmrid_l2/events.out.tfevents.1721910440.Administrator.41012.0

logs/pmrid_l2/events.out.tfevents.1721910472.Administrator.22532.0

logs/pmrid_l2/events.out.tfevents.1721910531.Administrator.32472.0

logs/pmrid_l2/events.out.tfevents.1721910697.Administrator.18800.0

logs/pmrid_l2/events.out.tfevents.1721910763.Administrator.40288.0

logs/pmrid_l2/events.out.tfevents.1721910888.Administrator.29528.0

logs/pmrid_l2/events.out.tfevents.1721910917.Administrator.15228.0

logs/pmrid_l2/events.out.tfevents.1721913471.Administrator.28048.0

logs/pmrid_l2/events.out.tfevents.1721916116.Administrator.41796.0

logs/pmrid_l2/events.out.tfevents.1721916604.Administrator.41240.0

logs/pmrid_l2/events.out.tfevents.1721917634.Administrator.12128.0

logs/pmrid_l2/events.out.tfevents.1721918161.Administrator.26620.0

logs/pmrid_l2/events.out.tfevents.1721919426.Administrator.28092.0

logs/pmrid_l2/events.out.tfevents.1721920750.Administrator.38528.0

logs/pmrid_l2/events.out.tfevents.1721920853.Administrator.7036.0

logs/pmrid_l2/events.out.tfevents.1721921473.Administrator.21240.0

logs/pmrid_l2/events.out.tfevents.1721956248.Administrator.14776.0

logs/pmrid_l2/events.out.tfevents.1721956265.Administrator.18468.0

logs/pmrid_l2/events.out.tfevents.1721958274.Administrator.5176.0

logs/pmrid_l2/events.out.tfevents.1721958890.Administrator.2668.0

logs/pmrid_l2/events.out.tfevents.1721960807.Administrator.4868.0

logs/pmrid_l2/events.out.tfevents.1721960831.Administrator.14088.0

logs/pmrid_l2/events.out.tfevents.1721962307.Administrator.152.0

logs/pmrid_l2/events.out.tfevents.1721965027.Administrator.16584.0

logs/pmrid_l2/events.out.tfevents.1721967388.Administrator.12544.0

main.py

from model.pmrid.pmrid_api import PMRID_API
import torch

if __name__ == '__main__':
    pmrid_api = PMRID_API(
        100,                                   # epochs
        4,                                     # batch size
        0.0001,                                # learning rate
        'cuda:0',                              # device
        './logs/pmrid_l2/',                    # tensorboard log directory
        './params/pmrid_l2/',                  # checkpoint directory
        r'F:\learn\PMRID-Pytorch-main\PMRID-Pytorch-main\list',  # training list
        r'F:\learn\PMRID-Pytorch-main\PMRID-Pytorch-main\list',  # validation list
        True,                                  # load pretrained weights
        './model/pmrid/pmrid_pretrained.ckp'   # pretrained checkpoint path
    )
    # pmrid_api.train_and_value()  # train and validate
    pmrid_api.test(r'F:\learn\PMRID-Pytorch-main\PMRID-Pytorch-main\params\pmrid_l2\91.ckp',
                   './output/pmrid_l2_dataset/value/right/')  # test

model/pmrid/__pycache__/pmrid.cpython-36.pyc

model/pmrid/__pycache__/pmrid.cpython-38.pyc

model/pmrid/__pycache__/pmrid_api.cpython-311.pyc

model/pmrid/__pycache__/pmrid_api.cpython-36.pyc

model/pmrid/__pycache__/pmrid_api.cpython-38.pyc

model/pmrid/__pycache__/utils.cpython-36.pyc

model/pmrid/__pycache__/utils.cpython-38.pyc

model/pmrid/pmrid.py

#!/usr/bin/env python3
import torch
import torch.nn as nn
from collections import OrderedDict
import numpy as np


def Conv2D(
    in_channels: int, out_channels: int,
    kernel_size: int, stride: int, padding: int,
    is_seperable: bool = False, has_relu: bool = False,
):
    modules = OrderedDict()
    if is_seperable:
        # depthwise-separable convolution: per-channel spatial conv + 1x1 pointwise conv
        modules['depthwise'] = nn.Conv2d(
            in_channels, in_channels, kernel_size, stride, padding,
            groups=in_channels, bias=False,
        )
        modules['pointwise'] = nn.Conv2d(
            in_channels, out_channels,
            kernel_size=1, stride=1, padding=0, bias=True,
        )
    else:
        modules['conv'] = nn.Conv2d(
            in_channels, out_channels, kernel_size, stride, padding, bias=True,
        )
    if has_relu:
        modules['relu'] = nn.ReLU()
    return nn.Sequential(modules)


class EncoderBlock(nn.Module):
    def __init__(self, in_channels: int, mid_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        self.conv1 = Conv2D(in_channels, mid_channels, kernel_size=5, stride=stride, padding=2,
                            is_seperable=True, has_relu=True)
        self.conv2 = Conv2D(mid_channels, out_channels, kernel_size=5, stride=1, padding=2,
                            is_seperable=True, has_relu=False)
        # projection shortcut when the shape changes, identity otherwise
        self.proj = (
            nn.Identity()
            if stride == 1 and in_channels == out_channels
            else Conv2D(in_channels, out_channels, kernel_size=3, stride=stride, padding=1,
                        is_seperable=True, has_relu=False)
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        proj = self.proj(x)
        x = self.conv1(x)
        x = self.conv2(x)
        x = x + proj
        return self.relu(x)


def EncoderStage(in_channels: int, out_channels: int, num_blocks: int):
    # the first block downsamples by 2; the rest keep the resolution
    blocks = [
        EncoderBlock(
            in_channels=in_channels,
            mid_channels=out_channels // 4,
            out_channels=out_channels,
            stride=2,
        )
    ]
    for _ in range(num_blocks - 1):
        blocks.append(
            EncoderBlock(
                in_channels=out_channels,
                mid_channels=out_channels // 4,
                out_channels=out_channels,
                stride=1,
            )
        )
    return nn.Sequential(*blocks)


class DecoderBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        self.conv0 = Conv2D(
            in_channels, out_channels, kernel_size=kernel_size, padding=padding,
            stride=1, is_seperable=True, has_relu=True,
        )
        self.conv1 = Conv2D(
            out_channels, out_channels, kernel_size=kernel_size, padding=padding,
            stride=1, is_seperable=True, has_relu=False,
        )

    def forward(self, x):
        inp = x
        x = self.conv0(x)
        x = self.conv1(x)
        x = x + inp
        return x


class DecoderStage(nn.Module):
    def __init__(self, in_channels: int, skip_in_channels: int, out_channels: int):
        super().__init__()
        self.decode_conv = DecoderBlock(in_channels, in_channels, kernel_size=3)
        self.upsample = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2, padding=0)
        self.proj_conv = Conv2D(skip_in_channels, out_channels, kernel_size=3, stride=1, padding=1,
                                is_seperable=True, has_relu=True)
        # M.init.msra_normal_(self.upsample.weight, mode='fan_in', nonlinearity='linear')

    def forward(self, inputs):
        inp, skip = inputs
        x = self.decode_conv(inp)
        x = self.upsample(x)
        y = self.proj_conv(skip)
        return x + y


class PMRID(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv0 = Conv2D(in_channels=4, out_channels=16, kernel_size=3, padding=1, stride=1,
                            is_seperable=False, has_relu=True)
        self.enc1 = EncoderStage(in_channels=16, out_channels=64, num_blocks=2)
        self.enc2 = EncoderStage(in_channels=64, out_channels=128, num_blocks=2)
        self.enc3 = EncoderStage(in_channels=128, out_channels=256, num_blocks=4)
        self.enc4 = EncoderStage(in_channels=256, out_channels=512, num_blocks=4)
        self.encdec = Conv2D(in_channels=512, out_channels=64, kernel_size=3, padding=1, stride=1,
                             is_seperable=True, has_relu=True)
        self.dec1 = DecoderStage(in_channels=64, skip_in_channels=256, out_channels=64)
        self.dec2 = DecoderStage(in_channels=64, skip_in_channels=128, out_channels=32)
        self.dec3 = DecoderStage(in_channels=32, skip_in_channels=64, out_channels=32)
        self.dec4 = DecoderStage(in_channels=32, skip_in_channels=16, out_channels=16)
        self.out0 = DecoderBlock(in_channels=16, out_channels=16, kernel_size=3)
        self.out1 = Conv2D(in_channels=16, out_channels=4, kernel_size=3, stride=1, padding=1,
                           is_seperable=False, has_relu=False)

    def forward(self, inp):
        conv0 = self.conv0(inp)
        conv1 = self.enc1(conv0)
        conv2 = self.enc2(conv1)
        conv3 = self.enc3(conv2)
        conv4 = self.enc4(conv3)
        conv5 = self.encdec(conv4)

        up3 = self.dec1((conv5, conv3))
        up2 = self.dec2((up3, conv2))
        up1 = self.dec3((up2, conv1))
        x = self.dec4((up1, conv0))

        x = self.out0(x)
        x = self.out1(x)
        # residual prediction: the network outputs a correction that is added to the input
        pred = inp + x
        return pred


if __name__ == "__main__":
    net = PMRID()  # the class defined above (the original called an undefined `Network`)
    img = torch.randn(1, 4, 64, 64, device=torch.device('cpu'), dtype=torch.float32)
    out = net(img)

model/pmrid/pmrid_api.py

import os
import cv2
import skimage.metrics
import numpy as np
from typing import Tuple
import torch
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

from model.pmrid.utils import RawUtils
from model.pmrid.pmrid import PMRID


class KSigma:
    def __init__(self, k_coeff: Tuple[float, float], b_coeff: Tuple[float, float, float],
                 anchor: float, v: float = 959.0):
        self.K = np.poly1d(k_coeff)
        self.Sigma = np.poly1d(b_coeff)
        self.anchor = anchor
        self.v = v

    def __call__(self, img_01, iso: float, inverse=False):
        # map the image into (or, with inverse=True, back from) the noise space of the anchor ISO
        k, sigma = self.K(iso), self.Sigma(iso)
        k_a, sigma_a = self.K(self.anchor), self.Sigma(self.anchor)

        cvt_k = k_a / k
        cvt_b = (sigma / (k ** 2) - sigma_a / (k_a ** 2)) * k_a

        img = img_01 * self.v
        if not inverse:
            img = img * cvt_k + cvt_b
        else:
            img = (img - cvt_b) / cvt_k
        return img / self.v


class DataProcess():
    def __init__(self):
        self.k_sigma = KSigma(
            k_coeff=[0.0005995267, 0.00868861],
            b_coeff=[7.11772e-7, 6.514934e-4, 0.11492713],
            anchor=1600,
        )

    def pre_process(self, bayer: np.ndarray, iso: float):
        # normalize
        bayer = bayer / 255.0
        # bayer to rggb
        rggb = RawUtils.bayer2rggb(bayer)
        rggb = rggb.clip(0, 1)
        # pad so both spatial sizes are multiples of 32 (the network downsamples five times)
        H, W = rggb.shape[:2]
        ph, pw = (32 - (H % 32)) // 2, (32 - (W % 32)) // 2
        self.ph, self.pw = ph, pw
        rggb = np.pad(rggb, [(ph, ph), (pw, pw), (0, 0)], 'constant')
        # transpose to channel-first
        rggb = rggb.transpose(2, 0, 1)
        # ksigma
        rggb = self.k_sigma(rggb, iso)
        # inverse normalize
        rggb = rggb * 255.0
        return rggb

    def post_process(self, rggb: np.ndarray, iso: float):
        # normalize
        rggb = rggb / 255.0
        # inverse ksigma
        rggb = self.k_sigma(rggb, iso, inverse=True)
        # transpose back to channel-last
        rggb = rggb.transpose(1, 2, 0)
        # remove the padding added in pre_process
        ph, pw = self.ph, self.pw
        rggb = rggb[ph:-ph, pw:-pw]
        # rggb to bayer
        bayer = RawUtils.rggb2bayer(rggb)
        bayer = bayer.clip(0, 1)
        # inverse normalize
        bayer = bayer * 255.0
        return bayer


class PMRIDDataset(Dataset):
    def __init__(self, filepath, data_process, train=False):
        self.input_path = []
        self.gt_path = []
        for line in open(filepath):
            self.input_path.append(line.split(" ")[0])
            self.gt_path.append(line.split(" ")[1])
        self.len = len(self.input_path)
        self.data_process = data_process

    def __getitem__(self, index):
        input_iso = 4300
        gt_iso = 4300

        input_bayer = cv2.imread(self.input_path[index], 0).astype(np.float32)
        input_rggb = self.data_process.pre_process(input_bayer, input_iso)
        input_data = torch.from_numpy(input_rggb)

        gt_bayer = cv2.imread(self.gt_path[index], 0).astype(np.float32)
        gt_rggb = self.data_process.pre_process(gt_bayer, gt_iso)
        gt_data = torch.from_numpy(gt_rggb)

        label = self.input_path[index].split('/')[-1]
        return input_data, gt_data, input_iso, gt_iso, label

    def __len__(self):
        return self.len


class PMRID_API():
    def __init__(self, epoch, batch_size, learning_rate, device, logs_path, params_path,
                 train_list_path, value_list_path, is_load_pretrained, pretrained_path):
        # parameters
        self.epoch = epoch
        self.batch_size = batch_size
        self.learning_rate = learning_rate
        self.device = device
        self.logs_path = logs_path
        self.params_path = params_path
        self.train_list_path = train_list_path
        self.value_list_path = value_list_path
        self.is_load_pretrained = is_load_pretrained
        self.pretrained_path = pretrained_path
        # data process
        self.data_process = DataProcess()
        # data loaders
        train_dataset = PMRIDDataset(self.train_list_path, self.data_process)
        value_dataset = PMRIDDataset(self.value_list_path, self.data_process)
        self.train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
        self.value_loader = DataLoader(value_dataset, shuffle=False, batch_size=1)
        # tensorboard
        if not os.path.exists(self.logs_path):
            os.makedirs(self.logs_path)
        self.writer = SummaryWriter(self.logs_path)
        # networks
        if not os.path.exists(self.params_path):
            os.makedirs(self.params_path)
        self.pmrid = PMRID()
        # self.criterion = torch.nn.L1Loss()
        self.criterion = torch.nn.MSELoss()
        self.optimizer = optim.Adam(self.pmrid.parameters(), lr=self.learning_rate)

    def load_weight(self, path):
        states = torch.load(path)
        self.pmrid.load_state_dict(states)
        self.pmrid.to(self.device)
        print('[load] load finished')

    def init_weight(self):
        # no custom initialization; just make sure the model is on the right device
        self.pmrid.to(self.device)

    def train(self, epoch):
        losses = []
        for batch_idx, data in enumerate(self.train_loader, 0):
            inputs, gts, input_iso, gt_iso, label = data
            inputs, gts = inputs.to(self.device), gts.to(self.device)
            self.optimizer.zero_grad()
            outputs = self.pmrid(inputs)
            loss = self.criterion(outputs, gts)
            loss.backward()
            self.optimizer.step()
            losses.append(loss.item())
            print('[train] epoch: %d, batch: %d, loss: %f' % (epoch + 1, batch_idx + 1, loss.item()))
        mean_loss = np.mean(losses)
        print('[train] epoch: %d, mean loss: %f' % (epoch + 1, mean_loss))
        self.writer.add_scalar('loss', mean_loss, epoch + 1)

    def value(self, epoch):
        psnrs = []
        ssims = []
        with torch.no_grad():
            for batch_idx, data in enumerate(self.value_loader, 0):
                inputs, gts, input_iso, gt_iso, label = data
                inputs = inputs.to(self.device)
                gts = gts.to(self.device)  # fixed: the original assigned `inputs` here
                # run pmrid
                outputs = self.pmrid(inputs)

                input_rggb = inputs.squeeze().cpu().numpy()
                input_bayer = self.data_process.post_process(input_rggb, input_iso[0]) / 255.0
                gt_rggb = gts.squeeze().cpu().numpy()
                gt_bayer = self.data_process.post_process(gt_rggb, gt_iso[0]) / 255.0
                output_rggb = outputs.squeeze().cpu().numpy()
                output_bayer = self.data_process.post_process(output_rggb, input_iso[0]) / 255.0

                psnr = skimage.metrics.peak_signal_noise_ratio(gt_bayer, output_bayer)
                # data_range is a scalar span, not a tuple; the images are in [0, 1]
                ssim = skimage.metrics.structural_similarity(gt_bayer, output_bayer, data_range=1.0)
                psnrs.append(float(psnr))
                ssims.append(float(ssim))

                print(f"input_bayer min: {input_bayer.min()}, max: {input_bayer.max()}")
                print(f"gt_bayer min: {gt_bayer.min()}, max: {gt_bayer.max()}")
                print(f"output_bayer min: {output_bayer.min()}, max: {output_bayer.max()}")
                print('[value] epoch: %d, batch: %d, psnr: %f, ssim: %f'
                      % (epoch + 1, batch_idx + 1, psnr, ssim))
        mean_psnr = np.mean(psnrs)
        mean_ssim = np.mean(ssims)
        print('[value] epoch: %d, mean psnr: %f, mean ssim: %f' % (epoch + 1, mean_psnr, mean_ssim))
        self.writer.add_scalar('psnr', mean_psnr, epoch + 1)
        self.writer.add_scalar('ssim', mean_ssim, epoch + 1)

    def train_and_value(self):
        if self.is_load_pretrained:
            self.load_weight(self.pretrained_path)
        else:
            self.init_weight()
        for epoch in range(self.epoch):
            if self.is_load_pretrained:
                pass
            else:
                # halve the learning rate every 20 epochs when training from scratch
                for param_group in self.optimizer.param_groups:
                    param_group['lr'] = self.learning_rate * (0.5 ** (epoch // 20))
            self.train(epoch)
            self.value(epoch)
            torch.save(self.pmrid.state_dict(), self.params_path + '/' + str(epoch + 1) + '.ckp')

    def test(self, params_path, output_path):
        self.load_weight(params_path)
        with torch.no_grad():
            for batch_idx, data in enumerate(self.value_loader, 0):
                inputs, gts, input_iso, gt_iso, label = data
                inputs = inputs.to(self.device)
                gts = gts.to(self.device)  # fixed: the original assigned `inputs` here
                # run pmrid
                outputs = self.pmrid(inputs)
                output_rggb = outputs.squeeze().cpu().numpy()
                output_bayer = self.data_process.post_process(output_rggb, input_iso[0])
                if not os.path.exists(output_path):
                    os.makedirs(output_path)
                print('[test] ' + output_path + label[0])
                cv2.imwrite(output_path + label[0], output_bayer.astype(np.uint8))


if __name__ == '__main__':
    pmrid_api = PMRID_API(
        200, 10, 0.01, 'cuda:0',
        './logs/pmrid_l2/', './params/pmrid_l2/',
        './data/right_value_list.txt', './data/right_value_list.txt',
        True, './model/pmrid/pmrid_pretrained.ckp'
    )
    pmrid_api.train_and_value()
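A quick numerical sanity check of the KSigma transform above (my own sketch, not a file from this repo; it assumes it is run from the repo root so the import resolves): the forward and inverse mappings at the same ISO should cancel exactly, up to float rounding.

import numpy as np
from model.pmrid.pmrid_api import KSigma

k_sigma = KSigma(
    k_coeff=[0.0005995267, 0.00868861],
    b_coeff=[7.11772e-7, 6.514934e-4, 0.11492713],
    anchor=1600,
)
img = np.random.rand(4, 64, 64).astype(np.float32)  # RGGB planes in [0, 1]
fwd = k_sigma(img, iso=4300)
back = k_sigma(fwd, iso=4300, inverse=True)
print(np.abs(back - img).max())  # should be ~0 (float rounding only)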

model/pmrid/pmrid_pretrained.ckp

archive/data.pkl

archive/data/1965483200

archive/data/1965483296

archive/data/1990042896

archive/data/1990042992

archive/data/1999848880

archive/data/43483136

archive/data/81139184

archive/data/81213264

archive/data/81369568

archive/data/82205264

archive/data/82227056

archive/data/82426176

archive/data/82538720

archive/data/82559648

archive/data/82567504

archive/data/82694800

archive/data/82696528

archive/data/82703760

archive/data/82716240

archive/data/82769072

archive/data/82769328

archive/data/82773904

archive/data/82775680

archive/data/82800544

archive/data/82801056

archive/data/82801568

archive/data/82804000

archive/data/82875536

archive/data/82876432

archive/data/82890960

archive/data/82907984

archive/data/82908496

archive/data/82913424

archive/data/82930448

archive/data/82931344

archive/data/82945872

archive/data/82979280

archive/data/82979920

archive/data/82988048

archive/data/83054224

archive/data/83055632

archive/data/83061968

archive/data/83193680

archive/data/83195088

archive/data/83222416

archive/data/83288592

archive/data/83289232

archive/data/83297360

archive/data/83363536

archive/data/83364944

archive/data/83392272

archive/data/83458448

archive/data/83459088

archive/data/83467216

archive/data/83533392

archive/data/83534800

archive/data/83562128

archive/data/83628304

archive/data/83628944

archive/data/83637072

archive/data/83637584

archive/data/83638992

archive/data/83666320

archive/data/83666832

archive/data/83667728

archive/data/83682256

archive/data/83682768

archive/data/83685040

archive/data/83685200

archive/data/83686800

archive/data/83687312

archive/data/83689744

archive/data/83691344

archive/data/83691856

archive/data/83692752

archive/data/83694352

archive/data/83694864

archive/data/85278512

archive/data/85331504

archive/data/85594288

archive/data/85595184

archive/data/85609712

archive/data/85872496

archive/data/85874928

archive/data/85927856

archive/data/86190640

archive/data/86191536

archive/data/86206064

archive/data/86468848

archive/data/86471280

archive/data/86491440

archive/data/86623152

archive/data/86623792

archive/data/86628352

archive/data/86645920

archive/data/86666880

archive/data/86667520

archive/data/86746352

archive/data/86812528

archive/data/86813168

archive/data/86817200

archive/data/86834224

archive/data/86834864

archive/data/86838896

archive/data/86855920

archive/data/86856560

archive/data/86891056

archive/data/86891568

archive/data/86897488

archive/data/86914512

archive/data/86915024

archive/data/86917904

archive/data/86922640

archive/data/86923152

archive/data/86926032

archive/data/86930768

archive/data/86931280

archive/data/86949392

archive/data/86949904

archive/data/86953520

archive/data/86962352

archive/data/86962864

archive/data/86965744

archive/data/86970480

archive/data/86970992

archive/data/86973872

archive/data/86978608

archive/data/86979120

archive/data/86989040

archive/data/86989488

archive/data/86991376

archive/data/86993040

archive/data/86993488

archive/data/86995792

archive/data/86997456

archive/data/86997904

archive/data/87000208

archive/data/87001872

archive/data/87002320

archive/data/87006944

archive/data/87010448

archive/version

3

model/pmrid/utils.py

#!/usr/bin/env python3
import cv2
import numpy as np


class RawUtils:

    @classmethod
    def bggr2rggb(cls, *bayers):
        # flip the mosaic both ways to turn a BGGR pattern into RGGB (and back)
        res = []
        for bayer in bayers:
            res.append(bayer[::-1, ::-1])
        if len(res) == 1:
            return res[0]
        return res

    @classmethod
    def rggb2bggr(cls, *bayers):
        return cls.bggr2rggb(*bayers)

    @classmethod
    def bayer2rggb(cls, *bayers):
        # pack each HxW Bayer mosaic into an (H/2)x(W/2)x4 RGGB stack
        res = []
        for bayer in bayers:
            H, W = bayer.shape
            res.append(
                bayer.reshape(H // 2, 2, W // 2, 2)
                .transpose(0, 2, 1, 3)
                .reshape(H // 2, W // 2, 4)
            )
        if len(res) == 1:
            return res[0]
        return res

    @classmethod
    def rggb2bayer(cls, *rggbs):
        # exact inverse of bayer2rggb
        res = []
        for rggb in rggbs:
            H, W, _ = rggb.shape
            res.append(
                rggb.reshape(H, W, 2, 2)
                .transpose(0, 2, 1, 3)
                .reshape(H * 2, W * 2)
            )
        if len(res) == 1:
            return res[0]
        return res

    @classmethod
    def bayer2rgb(cls, *bayer_01s, wb_gain, CCM, gamma=2.2):
        # white balance, demosaic, color-correct, and gamma-encode a [0, 1] Bayer image
        wb_gain = np.array(wb_gain)[[0, 1, 1, 2]]
        res = []
        for bayer_01 in bayer_01s:
            bayer = cls.rggb2bayer(
                (cls.bayer2rggb(bayer_01) * wb_gain).clip(0, 1)
            ).astype(np.float32)
            bayer = np.round(np.ascontiguousarray(bayer) * 65535).clip(0, 65535).astype(np.uint16)
            rgb = cv2.cvtColor(bayer, cv2.COLOR_BAYER_BG2RGB_EA).astype(np.float32) / 65535
            rgb = rgb.dot(np.array(CCM).T).clip(0, 1)
            rgb = rgb ** (1 / gamma)
            res.append(rgb.astype(np.float32))
        if len(res) == 1:
            return res[0]
        return res

# vim: ts=4 sw=4 sts=4 expandtab
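A minimal round-trip check of the packing helpers (my own sketch, not part of the repo; assumes it runs from the repo root): bayer2rggb turns an H×W mosaic into an (H/2)×(W/2)×4 stack, and rggb2bayer undoes it exactly.

import numpy as np
from model.pmrid.utils import RawUtils

bayer = np.random.rand(8, 12).astype(np.float32)
rggb = RawUtils.bayer2rggb(bayer)
print(rggb.shape)                                        # (4, 6, 4)
print(np.array_equal(RawUtils.rggb2bayer(rggb), bayer))  # True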

output/pmrid_l2_dataset/value/right/1_0.png

output/pmrid_l2_dataset/value/right/1_1.png

output/pmrid_l2_dataset/value/right/1_10.png

output/pmrid_l2_dataset/value/right/1_100.png

output/pmrid_l2_dataset/value/right/1_101.png

output/pmrid_l2_dataset/value/right/1_102.png

output/pmrid_l2_dataset/value/right/1_103.png

output/pmrid_l2_dataset/value/right/1_104.png

output/pmrid_l2_dataset/value/right/1_105.png

output/pmrid_l2_dataset/value/right/1_106.png

output/pmrid_l2_dataset/value/right/1_107.png

output/pmrid_l2_dataset/value/right/1_108.png

output/pmrid_l2_dataset/value/right/1_109.png

output/pmrid_l2_dataset/value/right/1_11.png

output/pmrid_l2_dataset/value/right/1_110.png

output/pmrid_l2_dataset/value/right/1_111.png

output/pmrid_l2_dataset/value/right/1_112.png

output/pmrid_l2_dataset/value/right/1_113.png

output/pmrid_l2_dataset/value/right/1_114.png

output/pmrid_l2_dataset/value/right/1_115.png

output/pmrid_l2_dataset/value/right/1_116.png

output/pmrid_l2_dataset/value/right/1_117.png

output/pmrid_l2_dataset/value/right/1_118.png

output/pmrid_l2_dataset/value/right/1_119.png

output/pmrid_l2_dataset/value/right/1_12.png

output/pmrid_l2_dataset/value/right/1_120.png

output/pmrid_l2_dataset/value/right/1_121.png

output/pmrid_l2_dataset/value/right/1_122.png

output/pmrid_l2_dataset/value/right/1_123.png

output/pmrid_l2_dataset/value/right/1_124.png

output/pmrid_l2_dataset/value/right/1_125.png

output/pmrid_l2_dataset/value/right/1_126.png

output/pmrid_l2_dataset/value/right/1_127.png

output/pmrid_l2_dataset/value/right/1_13.png

output/pmrid_l2_dataset/value/right/1_14.png

output/pmrid_l2_dataset/value/right/1_15.png

output/pmrid_l2_dataset/value/right/1_16.png

output/pmrid_l2_dataset/value/right/1_17.png

output/pmrid_l2_dataset/value/right/1_18.png

output/pmrid_l2_dataset/value/right/1_19.png

output/pmrid_l2_dataset/value/right/1_2.png

output/pmrid_l2_dataset/value/right/1_20.png

output/pmrid_l2_dataset/value/right/1_21.png

output/pmrid_l2_dataset/value/right/1_22.png

output/pmrid_l2_dataset/value/right/1_23.png

output/pmrid_l2_dataset/value/right/1_24.png

output/pmrid_l2_dataset/value/right/1_25.png

output/pmrid_l2_dataset/value/right/1_26.png

output/pmrid_l2_dataset/value/right/1_27.png

output/pmrid_l2_dataset/value/right/1_28.png

output/pmrid_l2_dataset/value/right/1_29.png

output/pmrid_l2_dataset/value/right/1_3.png

output/pmrid_l2_dataset/value/right/1_30.png

output/pmrid_l2_dataset/value/right/1_31.png

output/pmrid_l2_dataset/value/right/1_32.png

output/pmrid_l2_dataset/value/right/1_33.png

output/pmrid_l2_dataset/value/right/1_34.png

output/pmrid_l2_dataset/value/right/1_35.png

output/pmrid_l2_dataset/value/right/1_36.png

output/pmrid_l2_dataset/value/right/1_37.png

output/pmrid_l2_dataset/value/right/1_38.png

output/pmrid_l2_dataset/value/right/1_39.png

output/pmrid_l2_dataset/value/right/1_4.png

output/pmrid_l2_dataset/value/right/1_40.png

output/pmrid_l2_dataset/value/right/1_41.png

output/pmrid_l2_dataset/value/right/1_42.png

output/pmrid_l2_dataset/value/right/1_43.png

output/pmrid_l2_dataset/value/right/1_44.png

output/pmrid_l2_dataset/value/right/1_45.png

output/pmrid_l2_dataset/value/right/1_46.png

output/pmrid_l2_dataset/value/right/1_47.png

output/pmrid_l2_dataset/value/right/1_48.png

output/pmrid_l2_dataset/value/right/1_49.png

output/pmrid_l2_dataset/value/right/1_5.png

output/pmrid_l2_dataset/value/right/1_50.png

output/pmrid_l2_dataset/value/right/1_51.png

output/pmrid_l2_dataset/value/right/1_52.png

output/pmrid_l2_dataset/value/right/1_53.png

output/pmrid_l2_dataset/value/right/1_54.png

output/pmrid_l2_dataset/value/right/1_55.png

output/pmrid_l2_dataset/value/right/1_56.png

output/pmrid_l2_dataset/value/right/1_57.png

output/pmrid_l2_dataset/value/right/1_58.png

output/pmrid_l2_dataset/value/right/1_59.png

output/pmrid_l2_dataset/value/right/1_6.png

output/pmrid_l2_dataset/value/right/1_60.png

output/pmrid_l2_dataset/value/right/1_61.png

output/pmrid_l2_dataset/value/right/1_62.png

output/pmrid_l2_dataset/value/right/1_63.png

output/pmrid_l2_dataset/value/right/1_64.png

output/pmrid_l2_dataset/value/right/1_65.png

output/pmrid_l2_dataset/value/right/1_66.png

output/pmrid_l2_dataset/value/right/1_67.png

output/pmrid_l2_dataset/value/right/1_68.png

output/pmrid_l2_dataset/value/right/1_69.png

output/pmrid_l2_dataset/value/right/1_7.png

output/pmrid_l2_dataset/value/right/1_70.png

output/pmrid_l2_dataset/value/right/1_71.png

output/pmrid_l2_dataset/value/right/1_72.png

output/pmrid_l2_dataset/value/right/1_73.png

output/pmrid_l2_dataset/value/right/1_74.png

output/pmrid_l2_dataset/value/right/1_75.png

output/pmrid_l2_dataset/value/right/1_76.png

output/pmrid_l2_dataset/value/right/1_77.png

output/pmrid_l2_dataset/value/right/1_78.png

output/pmrid_l2_dataset/value/right/1_79.png

output/pmrid_l2_dataset/value/right/1_8.png

output/pmrid_l2_dataset/value/right/1_80.png

output/pmrid_l2_dataset/value/right/1_81.png

output/pmrid_l2_dataset/value/right/1_82.png

output/pmrid_l2_dataset/value/right/1_83.png

output/pmrid_l2_dataset/value/right/1_84.png

output/pmrid_l2_dataset/value/right/1_85.png

output/pmrid_l2_dataset/value/right/1_86.png

output/pmrid_l2_dataset/value/right/1_87.png

output/pmrid_l2_dataset/value/right/1_88.png

output/pmrid_l2_dataset/value/right/1_89.png

output/pmrid_l2_dataset/value/right/1_9.png

output/pmrid_l2_dataset/value/right/1_90.png

output/pmrid_l2_dataset/value/right/1_91.png

output/pmrid_l2_dataset/value/right/1_92.png

output/pmrid_l2_dataset/value/right/1_93.png

output/pmrid_l2_dataset/value/right/1_94.png

output/pmrid_l2_dataset/value/right/1_95.png

output/pmrid_l2_dataset/value/right/1_96.png

output/pmrid_l2_dataset/value/right/1_97.png

output/pmrid_l2_dataset/value/right/1_98.png

output/pmrid_l2_dataset/value/right/1_99.png

params/pmrid_l2/1.ckp

params/pmrid_l2/10.ckp

params/pmrid_l2/11.ckp

params/pmrid_l2/12.ckp

params/pmrid_l2/13.ckp

params/pmrid_l2/14.ckp

params/pmrid_l2/15.ckp

params/pmrid_l2/16.ckp

params/pmrid_l2/17.ckp

params/pmrid_l2/18.ckp

params/pmrid_l2/19.ckp

params/pmrid_l2/2.ckp

params/pmrid_l2/20.ckp

params/pmrid_l2/21.ckp

params/pmrid_l2/22.ckp

params/pmrid_l2/23.ckp

params/pmrid_l2/24.ckp

params/pmrid_l2/25.ckp

params/pmrid_l2/26.ckp

params/pmrid_l2/27.ckp

params/pmrid_l2/28.ckp

params/pmrid_l2/29.ckp

params/pmrid_l2/3.ckp

params/pmrid_l2/30.ckp

params/pmrid_l2/31.ckp

params/pmrid_l2/32.ckp

params/pmrid_l2/33.ckp

params/pmrid_l2/34.ckp

params/pmrid_l2/35.ckp

params/pmrid_l2/36.ckp

params/pmrid_l2/37.ckp

params/pmrid_l2/38.ckp

params/pmrid_l2/39.ckp

params/pmrid_l2/4.ckp

params/pmrid_l2/40.ckp

params/pmrid_l2/41.ckp

params/pmrid_l2/42.ckp

params/pmrid_l2/43.ckp

params/pmrid_l2/44.ckp

params/pmrid_l2/45.ckp

params/pmrid_l2/46.ckp

params/pmrid_l2/47.ckp

params/pmrid_l2/48.ckp

params/pmrid_l2/49.ckp

params/pmrid_l2/5.ckp

params/pmrid_l2/50.ckp

params/pmrid_l2/51.ckp

params/pmrid_l2/52.ckp

params/pmrid_l2/53.ckp

params/pmrid_l2/54.ckp

params/pmrid_l2/55.ckp

params/pmrid_l2/56.ckp

params/pmrid_l2/57.ckp

params/pmrid_l2/58.ckp

params/pmrid_l2/59.ckp

params/pmrid_l2/6.ckp

params/pmrid_l2/60.ckp

params/pmrid_l2/61.ckp

params/pmrid_l2/62.ckp

params/pmrid_l2/63.ckp

params/pmrid_l2/64.ckp

params/pmrid_l2/65.ckp

params/pmrid_l2/66.ckp

params/pmrid_l2/67.ckp

params/pmrid_l2/68.ckp

params/pmrid_l2/69.ckp

params/pmrid_l2/7.ckp

params/pmrid_l2/70.ckp

params/pmrid_l2/71.ckp

params/pmrid_l2/72.ckp

params/pmrid_l2/73.ckp

params/pmrid_l2/74.ckp

params/pmrid_l2/75.ckp

params/pmrid_l2/76.ckp

params/pmrid_l2/77.ckp

params/pmrid_l2/78.ckp

params/pmrid_l2/79.ckp

params/pmrid_l2/8.ckp

params/pmrid_l2/80.ckp

params/pmrid_l2/81.ckp

params/pmrid_l2/82.ckp

params/pmrid_l2/83.ckp

params/pmrid_l2/84.ckp

params/pmrid_l2/85.ckp

params/pmrid_l2/86.ckp

params/pmrid_l2/87.ckp

params/pmrid_l2/88.ckp

params/pmrid_l2/89.ckp

params/pmrid_l2/9.ckp

params/pmrid_l2/90.ckp

params/pmrid_l2/91.ckp

poit.py

import cv2
import os
import numpy as np

# Set the directories for the clean input images and the noisy outputs
image_dir = "./1/"
out = './2/'

# Loop through all the files in the directory
for filename in os.listdir(image_dir):
    # Check if the file is an image
    if filename.endswith(".png") or filename.endswith(".jpg"):
        # Read the image
        image_path = os.path.join(image_dir, filename)
        image = cv2.imread(image_path)

        # Set the amount of salt-and-pepper noise (4% of pixels, half salt, half pepper)
        s_vs_p = 0.5
        amount = 0.04

        # Create a copy of the image
        noisy_img = np.copy(image)

        # Add salt noise
        num_salt = np.ceil(amount * image.size * s_vs_p)
        coords = [np.random.randint(0, i - 1, int(num_salt)) for i in image.shape]
        noisy_img[coords[0], coords[1], :] = [255, 255, 255]

        # Add pepper noise
        num_pepper = np.ceil(amount * image.size * (1. - s_vs_p))
        coords = [np.random.randint(0, i - 1, int(num_pepper)) for i in image.shape]
        noisy_img[coords[0], coords[1], :] = [0, 0, 0]

        # Save the noisy image with a 'noisy_' prefix
        noisy_image_path = os.path.join(out, "noisy_" + filename)
        cv2.imwrite(noisy_image_path, noisy_img)

README.md

# PMRID-Pytorch

This is a project for training and testing PMRID. Some code is copied from [megvii-research/PMRID](https://github.com/megvii-research/PMRID).

- `generate_list.py` generates the list file used for training.
- `main.py` is for training and testing on the dataset.
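The full data pipeline, as inferred from the scripts in this repo (the absolute paths inside each script are machine-specific and may need adjusting):

1. `cut.py` slices each mhd+raw OCT volume (e.g. `TRAIN001/oct.mhd`) into per-B-scan 8-bit PNGs.
2. `poit.py` adds salt-and-pepper noise to those slices, writing `noisy_*.png` copies.
3. `generate_list.py` pairs each slice with its noisy copy into the `list` file.
4. `main.py` drives `PMRID_API` to train and validate (`train_and_value`) or to denoise a set of slices (`test`).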

try.py

import cv2
import numpy as np

# Read the noisy image and compute a median-filtered (denoised) version
noisy_image = r'F:\learn\EDCNN-master\EDCNN-master\demo2\noisy_1_0.png'
noisy_image = cv2.imread(noisy_image)
gray_image = cv2.cvtColor(noisy_image, cv2.COLOR_BGR2GRAY)
denoised_image = cv2.medianBlur(gray_image, 5)  # 5 is the kernel size; adjust as needed

# Compute PSNR
psnr = cv2.PSNR(gray_image, denoised_image)
print(f"PSNR: {psnr:.2f} dB")

# Compute SSIM
def ssim(img1, img2):
    C1 = (0.01 * 255) ** 2
    C2 = (0.03 * 255) ** 2
    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    kernel = cv2.getGaussianKernel(11, 1.5)
    window = np.outer(kernel, kernel.transpose())
    mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]
    mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
    mu1_sq = mu1 ** 2
    mu2_sq = mu2 ** 2
    mu1_mu2 = mu1 * mu2
    sigma1_sq = cv2.filter2D(img1 ** 2, -1, window)[5:-5, 5:-5] - mu1_sq
    sigma2_sq = cv2.filter2D(img2 ** 2, -1, window)[5:-5, 5:-5] - mu2_sq
    sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
    ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / \
               ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
    return ssim_map.mean()

ssim_score = ssim(gray_image, denoised_image)
print(f"SSIM: {ssim_score:.4f}")

# Display the images
cv2.imshow('Original Image', noisy_image)
cv2.imshow('Denoised Image', denoised_image)
cv2.waitKey(0)
cv2.destroyAllWindows()

# Commented-out DnCNN patch-training experiment, kept for reference:
# import os
# import cv2
# import numpy as np
# import torch
# import torch.nn as nn
# import torch.optim as optim
# from torch.utils.data import Dataset, DataLoader
# from Dncnn import DnCNN
#
#
# class DenoisingDataset(Dataset):
#     def __init__(self, data_dir, patch_size=40, stride=10):
#         self.data_dir = data_dir
#         self.patch_size = patch_size
#         self.stride = stride
#         self.file_list = os.listdir(data_dir)
#
#     def __len__(self):
#         return len(self.file_list)
#
#     def __getitem__(self, idx):
#         file_name = self.file_list[idx]
#         img_path = os.path.join(self.data_dir, file_name)
#         img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
#
#         # extract small patches from the original image
#         patches = []
#         for i in range(0, img.shape[0] - self.patch_size + 1, self.stride):
#             for j in range(0, img.shape[1] - self.patch_size + 1, self.stride):
#                 patch = img[i:i + self.patch_size, j:j + self.patch_size]
#                 patches.append(patch)
#
#         # convert the patches to a tensor
#         patches = np.stack(patches, axis=0)
#         patches = torch.from_numpy(patches).float().unsqueeze(1)
#
#         # add Gaussian noise
#         noise = torch.randn_like(patches) * 25
#         noisy_patches = patches + noise
#
#         return noisy_patches, patches
#
#
# def train_dncnn(dataset, model, device, num_epochs=100, batch_size=128, lr=0.001):
#     dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=4)
#     criterion = nn.MSELoss()
#     optimizer = optim.Adam(model.parameters(), lr=lr)
#
#     for epoch in range(num_epochs):
#         for i, (noisy_patches, clean_patches) in enumerate(dataloader):
#             noisy_patches = noisy_patches.to(device)
#             clean_patches = clean_patches.to(device)
#
#             optimizer.zero_grad()
#             output = model(noisy_patches)
#             loss = criterion(output, clean_patches)
#             loss.backward()
#             optimizer.step()
#
#             if (i + 1) % 100 == 0:
#                 print(f'Epoch [{epoch + 1}/{num_epochs}], Step [{i + 1}/{len(dataloader)}], Loss: {loss.item():.4f}')
#
#     return model
#
#
# # Create the DnCNN model
# model = DnCNN(depth=17, n_channels=64, image_channels=1)
# device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# model.to(device)
#
# # Create the dataset and train the model
# dataset = DenoisingDataset(r'F:\learn\EDCNN-master\EDCNN-master\demo2')
# trained_model = train_dncnn(dataset, model, device)
#
# # Save the trained model
# torch.save(trained_model.state_dict(), r'F:\learn\PMRID-Pytorch-main\PMRID-Pytorch-main/model.pth')


TRAIN001/oct.mhd

ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = False
TransformMatrix = 1 0 0 0 1 0 0 0 1
CenterOfRotation = 0 0 0
AnatomicalOrientation = RAI
Offset = 0 0 0
ElementSpacing = 0.011742 0.001955 0.046878
DimSize = 512 1024 128
ElementNumberOfChannels = 1
ElementType = MET_UCHAR
ElementDataFile = oct.raw
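A small sketch (my own, assuming SimpleITK is installed and the script runs next to the TRAIN001 directory) showing how this header maps onto the NumPy array used in cut.py: DimSize is listed as (x, y, z) = (512, 1024, 128), while GetArrayFromImage returns the axes reversed as (z, y, x).

import SimpleITK as sitk

img = sitk.ReadImage('TRAIN001/oct.mhd')  # reads oct.raw via ElementDataFile
print(img.GetSize())                      # (512, 1024, 128)
arr = sitk.GetArrayFromImage(img)
print(arr.shape)                          # (128, 1024, 512): 128 B-scans of 1024 x 512
print(arr.dtype)                          # uint8 (MET_UCHAR)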

TRAIN001/oct.raw

TRAIN001/reference.mhd

ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = False
TransformMatrix = 1 0 0 0 1 0 0 0 1
CenterOfRotation = 0 0 0
AnatomicalOrientation = RAI
Offset = 0 0 0
ElementSpacing = 0.011742 0.001955 0.046878
DimSize = 512 1024 128
ElementNumberOfChannels = 1
ElementType = MET_UCHAR
ElementDataFile = reference.raw

TRAIN001/reference.raw


TRAIN003/oct.mhd

ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = False
TransformMatrix = 1 0 0 0 1 0 0 0 1
CenterOfRotation = 0 0 0
AnatomicalOrientation = RAI
Offset = 0 0 0
ElementSpacing = 0.011742 0.001955 0.047244
DimSize = 512 1024 128
ElementNumberOfChannels = 1
ElementType = MET_UCHAR
ElementDataFile = oct.raw

TRAIN003/oct.raw

TRAIN003/reference.mhd

ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = False
TransformMatrix = 1 0 0 0 1 0 0 0 1
CenterOfRotation = 0 0 0
AnatomicalOrientation = RAI
Offset = 0 0 0
ElementSpacing = 0.011742 0.001955 0.047244
DimSize = 512 1024 128
ElementNumberOfChannels = 1
ElementType = MET_UCHAR
ElementDataFile = reference.raw

TRAIN003/reference.raw


TRAIN002/oct.mhd

ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = False
TransformMatrix = 1 0 0 0 1 0 0 0 1
CenterOfRotation = 0 0 0
AnatomicalOrientation = RAI
Offset = 0 0 0
ElementSpacing = 0.011742 0.001955 0.047244
DimSize = 512 1024 128
ElementNumberOfChannels = 1
ElementType = MET_UCHAR
ElementDataFile = oct.raw

TRAIN002/oct.raw

TRAIN002/reference.mhd

ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = False
TransformMatrix = 1 0 0 0 1 0 0 0 1
CenterOfRotation = 0 0 0
AnatomicalOrientation = RAI
Offset = 0 0 0
ElementSpacing = 0.011742 0.001955 0.047244
DimSize = 512 1024 128
ElementNumberOfChannels = 1
ElementType = MET_UCHAR
ElementDataFile = reference.raw

TRAIN002/reference.raw


TEST002/oct.mhd

ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = False
TransformMatrix = 1 0 0 0 1 0 0 0 1
CenterOfRotation = 0 0 0
AnatomicalOrientation = RAI
Offset = 0 0 0
ElementSpacing = 0.011742 0.001955 0.047244
DimSize = 512 1024 128
ElementNumberOfChannels = 1
ElementType = MET_UCHAR
ElementDataFile = oct.raw

TEST002/oct.raw


TEST003/oct.mhd

ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = False
TransformMatrix = 1 0 0 0 1 0 0 0 1
CenterOfRotation = 0 0 0
AnatomicalOrientation = RAI
Offset = 0 0 0
ElementSpacing = 0.011742 0.001955 0.047244
DimSize = 512 1024 128
ElementNumberOfChannels = 1
ElementType = MET_UCHAR
ElementDataFile = oct.raw

TEST003/oct.raw


TEST001/oct.mhd

ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = False
TransformMatrix = 1 0 0 0 1 0 0 0 1
CenterOfRotation = 0 0 0
AnatomicalOrientation = RAI
Offset = 0 0 0
ElementSpacing = 0.011742 0.001955 0.046878
DimSize = 512 1024 128
ElementNumberOfChannels = 1
ElementType = MET_UCHAR
ElementDataFile = oct.raw

TEST001/oct.raw