Image Tab

The Image tab houses image-related operations, including polygon extraction.

imagebar

The following tools are available in the Image tab:

Layer Merge

layermerge

The intent of the layer merge is to make one cohesive layer out of two or more existing layers. Turn on the layers to merge, and make sure all other layers are off. Do this by toggling the visibility checkmark in the Layers window. (See Layer Window)

Select Layer Merge and the following window will pop up:

fusewindow

The Create Merge button will make a new layer without modifying the existing layers.

Filter Layer

filterlayer

Selecting the Filter Layer icon pops up the following window:

filterwindow

Select the Layer to filter in the drop-down menu.

Use one of the listed profiles. The profiles are created using Extract Polygons. (See Extract Polygons)

Click Filter to apply the profile to the layer. This will overwrite the viewing information (warped directory) so the tiles look filtered in Pix2Net, while the original data is preserved.

Transform Layers

transformbutton

The user can flip or rotate an entire image layer by using Transform Layers. This is useful if a layer is imaged in the wrong orientation, or if you would like to add a backside image to a data set that was imaged topside.

Clicking the Transform Layers button opens the Transform Layers Window:

tranformlayerswindow

Click on the green + to pull up the Add Transformation Window:

addtransformationmenu

The user may select between:
  1. Flip Left to Right
  2. Flip Top to Bottom
  3. Rotate 90deg Clockwise
  4. Rotate 90deg Counter-Clockwise

dothetransformation

Click Transform and all of the transformations will be performed in order for the selected layers, and a new layer will be created.
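The flips and rotations are standard image transforms. The following minimal sketch (using Pillow, which is not part of Pix2Net, on a hypothetical tile file) shows a sequence of transformations applied in order, as the Transform button does for the selected layers:

    from PIL import Image

    # Hypothetical stand-in for one layer tile; Pix2Net applies the
    # transformations to the entire layer.
    img = Image.open("layer_tile.png")

    # The four operations offered in the Add Transformation window.
    OPS = {
        "flip_lr": Image.Transpose.FLIP_LEFT_RIGHT,
        "flip_tb": Image.Transpose.FLIP_TOP_BOTTOM,
        "rot_cw": Image.Transpose.ROTATE_270,   # 90 degrees clockwise
        "rot_ccw": Image.Transpose.ROTATE_90,   # 90 degrees counter-clockwise
    }

    # Transformations are applied in the order listed, e.g. flip then rotate.
    for op in ["flip_lr", "rot_cw"]:
        img = img.transpose(OPS[op])

    img.save("layer_tile_transformed.png")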

Extract Polygons

extractpolygons

Click the Extract Polygons Icon to open the Feature Extraction window.

featureextraction

The upper left icon loadsettings allows the user to load settings from another layer. This can help give a baseline for the filters to apply.

To save the settings, click on the Save a copy of this profile icon saveprofile. Enter a name for the profile and click Save. Note that the extraction settings used are automatically saved.

For via extraction, the user can add an image to the training set by clicking this icon addtotrainingset. Enter the name of the training set, or add the image to an existing training set. The Manage Training Images Window will automatically pop up.

To manage the training sets, click on the Manage training sets icon managetrainingicon. This will open the Manage Training Images Window:

mangaetrainingwindow

To view the via report (for edge detection via extraction) click on the icon viewviareport. Select between training sets to view the desired report.

The save icon savesettings allows the user to save a copy of the image. The user can save the original, filtered, or edge version of the image.

extractionprofile

The Target layer is the layer that the feature extraction will be applied to. The user may want multiple extractions from the same dataset, e.g., vias and lines extracted from the same images.

The Source layer is the image set used for the extraction.

The Extract menu allows the user to pick between extracting Lines or Vias. Via extraction assumes bright white vias in the image.

Line Extraction

linesettings

Min polygon size is used to filter out all noise with a width or length smaller than the value entered.

Smoothness sets the amount of pixel deviation that will be “smoothed” out when extracting the polygons. The higher this number is, the fewer points drawn for each polygon. If the number is too high, notches or jut-outs will be lost in the extraction.

Allow holes is checked when holes should be allowed in metal, and left unchecked when the holes should be filled in due to delayering artifacts.

polygonsimplification

There are two methods of polygon simplification: Smooth and Straighten. Smooth is straightforward: the higher the smoothness, the fewer points in each polygon. Keep an eye on the preview, because there is a threshold beyond which there are too few points and the polygons will not make good connections with other layers.
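The effect of Smooth is comparable to tolerance-based polyline simplification. A rough illustration using Shapely (an assumption for demonstration only, not Pix2Net's internal algorithm) shows how raising the tolerance removes points:

    from shapely.geometry import Polygon

    # A jagged polygon standing in for a noisy extracted trace (hypothetical data).
    poly = Polygon([(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 0),
                    (4, 2), (0, 2)])

    for tolerance in (0.05, 0.2, 0.5):
        simplified = poly.simplify(tolerance, preserve_topology=True)
        # A higher tolerance ("smoothness") leaves fewer vertices, at the risk
        # of losing notches and jut-outs.
        print(tolerance, len(simplified.exterior.coords))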

Straighten allows the user to extract polygons that look more orthogonal. The user inputs the minimum polygon width and can choose the standard option or the custom option. Highlighting each category will provide a brief description in the bottom left of the Feature Extraction window.

polygonstraighten

Here the user can adjust the smoothness, noise reduction, noise threshold, square corners, and minimum spacing.

fillblocksettings

Fill block settings are for fill patterns containing repeating rectangles of uniform size.

Identify fill blocks is checked to identify fill and place it on its own separate layer.

Expected width is the width of an average fill block in um.

Expected height is the height of an average fill block in um.

Bounds tolerance describes the percentage of deviation allowed in the length and width of a polygon for it to be considered fill.

Deviation allowed describes the percentage of deviation allowed in the calculated area of the fill blocks. (This is meant to help when the fill block shape is slightly irregular due to deprocessing.)

Add fill blocks to allows selection of a new or existing layer to add the fill block polygons to.
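As a rough illustration of how the width, height, and area tolerances interact, the following hypothetical check (not Pix2Net's code; all dimensions and defaults are made up) decides whether a polygon qualifies as fill:

    def looks_like_fill(width_um, height_um, area_um2,
                        expected_w=1.0, expected_h=0.5,
                        bounds_tol=0.10, area_tol=0.15):
        """Return True if a polygon matches the expected fill block.

        bounds_tol and area_tol are fractional tolerances, e.g. 0.10 = 10%.
        All dimensions are hypothetical example values in micrometers.
        """
        w_ok = abs(width_um - expected_w) <= bounds_tol * expected_w
        h_ok = abs(height_um - expected_h) <= bounds_tol * expected_h
        # Deviation allowed: compare the measured area against the ideal rectangle.
        ideal_area = expected_w * expected_h
        a_ok = abs(area_um2 - ideal_area) <= area_tol * ideal_area
        return w_ok and h_ok and a_ok

    print(looks_like_fill(1.05, 0.48, 0.50))   # within tolerance -> True
    print(looks_like_fill(1.40, 0.48, 0.67))   # width too far off -> False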

Via Extraction

Method- Choose between edge detection and neural network.

viaextraction

Edge Detection

This algorithm attempts to identify vias by tracing the edge around white dots in the image. The following parameters are used in this method:

Via diameter- This is the expected diameter of each via, in pixels.

Size tolerance- This is the percentage that the diameter is allowed to vary by. For example, if the via diameter is 20, and the size tolerance is 10%, then vias with diameters ranging from 18 to 22 will be identified.

Start edge- This is the minimum edge intensity Pix2Net will consider as it looks for places to start tracing an edge that may turn out to be a via. If this value is too high, then Pix2Net will miss vias that are very faint. If this value is too low, then it will take longer to extract vias, because the number of edges that Pix2Net has to trace will increase.

Average edge- This is the minimum average edge intensity that a via must have in order for it to be considered valid.

Circle similarity- This is how similar the shape of a via must be to a circle in order to be considered valid. 0% means that all via shapes are allowed; 100% means that only perfectly circular vias are allowed.

Brightness- This is how bright a via must be in order to be considered valid. 0% means that any via brightness is allowed; 100% means that only bright white vias are allowed.

Output via size- This is the size, in pixels, of the via polygon that will be placed.

Drop floating vias- When checked, select the metal layers above and below the via layer; any detected vias that are floating (not making contact with both metal layers) will be dropped automatically.
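To make the interaction of these parameters concrete, here is a hedged sketch (illustrative only, not Pix2Net's actual algorithm; the default values are made up) of how a traced candidate might be tested against the thresholds:

    def candidate_is_via(diameter_px, avg_edge, circle_similarity, brightness,
                         via_diameter=20, size_tolerance=0.10,
                         min_avg_edge=40, min_circle=0.60, min_brightness=0.50):
        """Hypothetical acceptance test mirroring the Edge Detection parameters."""
        # Size tolerance: a 20 px via at 10% tolerance accepts diameters of 18-22 px.
        low = via_diameter * (1 - size_tolerance)
        high = via_diameter * (1 + size_tolerance)
        if not (low <= diameter_px <= high):
            return False
        # Average edge, circle similarity, and brightness must all clear their minimums.
        return (avg_edge >= min_avg_edge
                and circle_similarity >= min_circle
                and brightness >= min_brightness)

    print(candidate_is_via(19, avg_edge=55, circle_similarity=0.8, brightness=0.7))  # True
    print(candidate_is_via(25, avg_edge=55, circle_similarity=0.8, brightness=0.7))  # False (too large)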

How to extract vias using Edge Detection:

The best way to determine the correct values for these thresholds is to use the “Show via report” feature. To do so, you must create a training image. Here are the steps:

  1. Click “Show polygons”. This will extract an initial set of vias with the current settings.
  2. Click the “Add image to training set” button. Enter a name for the new training set, and then click “Ok”.
  3. The “Manage Training Images” dialog will appear. Make sure that the correct “Training Set” and “Training Image” are selected. Use the “Add point” and “Remove points” tools to add missing vias and remove extra vias. (Note: You can’t use “Undo” or “Redo” here). When you’re finished, click “Close”.
  4. Click the “Show via report” button. Make sure that you have the correct image selected, and then click “Ok”.

The via report will list three statistics at the top: “Valid” is the number of vias that were correctly identified, “Missing” is the number of vias that were not extracted, and “Extra” is the number of edges that were incorrectly identified as vias.

For each entry in the table, you will see its type (valid, missing, or extra), its location in pixel coordinates, its size (the via diameter), the edge intensity, the circle similarity, and the brightness. When you click on an entry, you will see three views of the same area: The original image, the filtered image, and the edge intensity image. You will also see an “Edge points” list, which is the edge intensity at each point along the via’s boundary.

If you want to figure out a good brightness threshold, for example, you can click the “Brightness” column to sort the entries on brightness. If you notice that all of the vias with a brightness below 70% are “Extra”, and all of the vias with a brightness above 70% are “Valid”, then 70% is definitely a good value to use for a “Brightness” threshold.

vianeuralnetworksettings

Neural Network

The new method for extracting vias is to use neural networks. The nice thing about neural networks is that you do not have to manually specify thresholds; you simply have to create training images.

To create a training image, follow these steps:

  1. With the “Method” set to “Edge Detection”, click “Show polygons”. This will extract an initial set of vias with the current settings. Make sure that you note the diameter of the via here, because you will need that information in step 6. If your image contains black vias, you may choose to use an “Invert” filter for this step, because the “Edge detection” method only detects white vias.

  2. Click the “Add image to training set” button. Enter a name for the new training set, and then click “Ok”.

  3. The “Manage Training Images” dialog will appear. Make sure that the correct “Training Set” and “Training Image” are selected. Use the “Add point” and “Remove points” tools to add missing vias and remove extra vias. (Note: You can’t use “Undo” or “Redo” here). When you’re finished, click “Close”.

  4. Click “Cancel” to close the Feature Extraction window.

  5. Click the “Neural Networks” button to open the Manage Neural Networks dialog. Click “Add Network”.

  6. Set the “Via diameter” to the same value that you used in step 1. Click the add button next to “Training Sets” to add the training set that you created. On the right side of the window, you can now cycle through every via in each image of the training set. You should now adjust the “Downsample” and “Patch size” parameters:

    Downsample is the number of times the image on the right will be zoomed out. You should choose the zoom level that makes each via as easily recognizable as possible.

    Patch size is the size of the image on the right. You should try to make the patch size as small as possible (so that the neural network runs as quickly as possible), without making the patch size so small that the vias are no longer easily recognizable.

    When you’re finished tweaking the settings, click “Create”.

  7. You have created the network, but now you need to train it. Select the network and then click the “Start training” button. The neural network will be trained in the background. When you start training, the vias will be randomly divided into a “training set” and a “testing set” (a small sketch of this split follows this list). The “Training accuracy” is the network’s accuracy on the training images, and the “Testing accuracy” is the network’s accuracy on the testing images. In general, the “Testing accuracy” is the most useful statistic, because it measures how accurate the network is on via images that it has never seen before. The training will stop when the number of epochs reaches 10. (An epoch is a single pass through the entire training set).

  8. When the network is finished training, click the “View samples” button to visually inspect the results. You will see the following columns for each entry:

    Sample - A unique number for each entry in the table.

    Type - “Training” if the sample was in the training set, and “Testing” if the sample was in the testing set.

    Confidence - A percentage, from 0 to 100, that describes how confident the network is that it labeled the sample correctly.

    Correct - This is true if “Network label” matches “User label”.

    Network label - The label (“via” or “non-via”) that the neural network chose for this sample.

    User label - The label (“via” or “non-via”) that the user specified for this sample in the training image.

    Click “Close” to close the dialog.

  9. Click “Close” to close the Manage Neural Networks dialog. Click “Extract Polygons”. Change the “Method” to “Neural Networks”, and set “Neural Network” to the network you just added. If you used an “Invert” filter in step 1, then you should now remove that “Invert” filter, because the neural network has been trained on the raw, unfiltered image, so that is what it expects as its input.

  10. Click “Show polygons” to extract the vias using the neural network. Don’t worry if vias near the edge were not identified; the neural network will not try to identify vias at the edge, because it does not have enough context. If you notice that some vias are missing, but the “Testing accuracy” was very high, then the problem is probably not the neural network; the problem is probably in the algorithm that determines which image patches are possible vias that should be passed to the neural network. This is a known issue that will be fixed in the future.
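To make the training/testing split and the two accuracy figures from step 7 concrete, here is a minimal sketch (hypothetical data and a stand-in classifier, not Pix2Net's training code):

    import numpy as np

    # Hypothetical labelled samples: 1 = via, 0 = non-via.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=200)

    # Random split into a training set and a testing set, as described in step 7.
    idx = rng.permutation(len(labels))
    train_idx, test_idx = idx[:160], idx[160:]

    # Stand-in "network" that labels every sample as a via, just to show how the
    # two accuracies are computed; the real network trains for 10 epochs.
    predictions = np.ones(len(labels), dtype=int)

    train_acc = (predictions[train_idx] == labels[train_idx]).mean()
    test_acc = (predictions[test_idx] == labels[test_idx]).mean()
    print(f"Training accuracy: {train_acc:.0%}, Testing accuracy: {test_acc:.0%}")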

previewsettings

Tile- the tile being previewed in the four image panes to the right.

Show polygons- toggles the Polygons pane on or off.

Polygon color- click the square to select the preferred preview color.

Fill color- click the square to select a color for the fill blocks when using Identify fill blocks.

The center section houses the applied filters for extraction. To add a filter, click the green plus sign and the Add Filter window will appear:

addfilterwindow

The filters are sorted into four types. Once the desired filter is added, highlight it to change its settings.

Pixel Transforms- filters that apply a formula to each pixel regardless of the surrounding pixels.

Brightness

brightness1 brightness2

Negative values dim the image while positive values brighten the image.

Contrast

contrast1 contrast2

Negative values decrease the difference between pixels, while positive values will increase the difference.

Invert

invert1 invert2

This operation has no additional settings.
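For intuition, the three pixel transforms can be sketched with NumPy on an 8-bit grayscale tile (an illustration of the general idea, not Pix2Net's implementation; the exact contrast formula is an assumption):

    import numpy as np

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # hypothetical tile
    f = img.astype(np.float32)

    brightness = 30   # positive brightens, negative dims
    contrast = 1.4    # >1 increases pixel differences, <1 decreases them

    brightened = np.clip(f + brightness, 0, 255).astype(np.uint8)
    contrasted = np.clip((f - 128) * contrast + 128, 0, 255).astype(np.uint8)
    inverted = 255 - img   # Invert has no additional settings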

Thresholding Filters- transform each pixel according to whether it is below or above a certain value.

Threshold

threshold1 threshold2

Type is a drop-down menu allowing the selection of the different types of thresholding. Threshold is the value between 0 and 255. Look at the Filtered pane to see the effect.

Otsu’s Method

otsusmethod1 otsusmethod2

Otsu’s method produces black and white images using automatically detected settings. There are no additional user settings for this filter.
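A short sketch using OpenCV (an assumed library for illustration, not Pix2Net's internals) shows a fixed threshold next to the threshold Otsu's method chooses automatically:

    import cv2

    gray = cv2.imread("tile.png", cv2.IMREAD_GRAYSCALE)  # hypothetical tile

    # Fixed threshold: pixels above 127 become white, the rest become black.
    _, fixed = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # Otsu's method picks the threshold automatically from the image histogram.
    otsu_value, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    print("Otsu chose threshold:", otsu_value)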

Smoothness Filters- affect the rate of change of pixels across an image.

Median Blur

medianblur1 medianblur2

Window size is the number of neighboring pixels considered to calculate the median value. Number of times allows the filter to run multiple times.

Gaussian Blur

gaussian1 gaussian2

Bilateral Filter

bilateral1 bilateral2

Sharpen

sharpen1 sharpen2
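The four smoothness filters correspond to standard operations; a hedged OpenCV sketch for comparison (the kernel sizes and the sharpening kernel below are illustrative assumptions, not Pix2Net's settings):

    import cv2
    import numpy as np

    gray = cv2.imread("tile.png", cv2.IMREAD_GRAYSCALE)  # hypothetical tile

    median = cv2.medianBlur(gray, 5)                   # window size 5
    gaussian = cv2.GaussianBlur(gray, (5, 5), 0)
    bilateral = cv2.bilateralFilter(gray, 9, 75, 75)   # smooths while preserving edges

    # A simple sharpening kernel: emphasize the center pixel over its neighbors.
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(gray, -1, kernel)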

Morphological Operations- morph the shapes of bright pixels. They work best on binary images.

Dilate

dilate1 dilate2

Erode

erode1 erode2

Open

open1 open2

Close

close1 close2
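These four operations are standard morphology; a hedged OpenCV sketch on a binary image (for example, the output of a thresholding filter; file name and kernel size are hypothetical):

    import cv2
    import numpy as np

    binary = cv2.imread("thresholded_tile.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    kernel = np.ones((3, 3), np.uint8)

    dilated = cv2.dilate(binary, kernel)                        # grow bright regions
    eroded = cv2.erode(binary, kernel)                          # shrink bright regions
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # erode then dilate: removes specks
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # dilate then erode: fills small gaps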

As many filters as desired can be added. Once a filter is created, it can be turned on and off with the check mark. Once the desired results are obtained in the Filtered pane and the Polygons pane, click Extract to extract the polygons.

Clear Polygons

clearpolygons

Clears the polygons of the selected layer in the Layers window. The following confirmation window will appear:

clearpolygonswindow

Training Images

trainingimagesbutton

This will open the Manage Training Images window that can also be opened from the Extract Polygons dialog. Make adjustments to the training set as needed and click Close.

mangaetrainingwindow

Neural Networks

neuralnetworksbutton

This will open the Manage Neural Networks window.

manageneuralnetworks

The user will need to click the green + sign to add a new neural network. The Create Neural Network window will appear:

createneuralnetworks

Set the “Via diameter” to the same value that was used when creating the training images. Click the add button next to “Training Sets” to add the training set that was created. On the right side of the window, cycle through every via in each image of the training set. Adjust the “Downsample” and “Patch size” parameters:

Downsample is the number of times the image on the right will be zoomed out. Choose the zoom level that makes each via as easily recognizable as possible.

Patch size is the size of the image on the right. Make the patch size as small as possible (so that the neural network runs as quickly as possible), without making the patch size so small that the vias are no longer easily recognizable.

Click “Create”.

The network is created, but now it needs to be trained. Select the network and then click the “Start training” button. The neural network will be trained in the background.

runneuralnetwork

When the network is finished training, click the “View samples” button to visually inspect the results. The following columns are present for each entry:

Sample - A unique number for each entry in the table.

Type - “Training” if the sample was in the training set, and “Testing” if the sample was in the testing set.

Confidence - A percentage, from 0 to 100, that describes how confident the network is that it labeled the sample correctly.

Correct - This is true if “Network label” matches “User label”.

Network label - The label (“via” or “non-via”) that the neural network chose for this sample.

User label - The label (“via” or “non-via”) that the user specified for this sample in the training image.

Click “Close” to close the dialog.

Extract Memory

extractmemory

Selecting Extract Memory pulls up the Extract Memory window:

extractmemorywindow

For procedures on ROM extraction, see Extracting Memory from Images.

Export Bits

exportbits

Select Export Bits to export a .csv file containing the bits in the ROM.
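The exported file can be inspected with any spreadsheet tool or script. A minimal sketch (the file name, and the assumption that each CSV row holds one row of bits, are hypothetical):

    import csv

    # Hypothetical path to a file produced by Export Bits.
    with open("rom_bits.csv", newline="") as f:
        rows = list(csv.reader(f))

    # Assuming each CSV row holds one physical row of the ROM, join the
    # values into bit strings for quick inspection.
    for row in rows[:4]:
        print("".join(row))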

Image Settings

imagesettings

imagesettingswindow

The Original Image Format denotes the extension for the original tile set. This will convert the images between the .png, .jpg, and .bmp formats. The .png format is recommended to reduce the size of the dataset.

The Generated Image Format denotes the extension for images generated by Pix2Net. These are the images stored in the warped and multiscale directories. The recommended format is .jpg.
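Converting a tile set between these formats outside of Pix2Net can be sketched with Pillow (hypothetical directory names; Pix2Net performs its own conversion when the setting is changed):

    from pathlib import Path
    from PIL import Image

    # Hypothetical example: convert a directory of .bmp tiles to .png.
    src = Path("tiles_bmp")
    dst = Path("tiles_png")
    dst.mkdir(exist_ok=True)

    for bmp in src.glob("*.bmp"):
        Image.open(bmp).save(dst / (bmp.stem + ".png"))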