# Preprocessing¶

## Blob¶

class blob.Blob(centroid, contour, area, bounding_box_in_frame_coordinates, bounding_box_image=None, bounding_box_images_path=None, estimated_body_length=None, pixels=None, number_of_animals=None, frame_number=None, frame_number_in_video_path=None, in_frame_index=None, pixels_path=None, video_height=None, video_width=None, video_path=None, pixels_are_from_eroded_blob=False, resolution_reduction=1.0)[source]

Object representing a blob (collection of pixels) segmented from a frame

Attributes:
frame_number : int

Number of the frame from which the blob was segmented

in_frame_index : int

Hierarchy of the blob in the segmentation of the frame

number_of_animals : int

Number of animals to be tracked

centroid : tuple

Centroid as (x, y) in frame coordinates

contour : list

List of tuples (x, y) in frame coordinates of the pixels on the boundary of the blob

area : int

Number of pixels composing the blob

bounding_box_in_frame_coordinates: list

List of tuples [(x, y), (x + width, y + height)] of the bounding box rectangle enclosing the blob’s pixels

bounding_box_image: ndarray

Image obtained by clipping the frame to the bounding box

estimated_body_length: float

Body length estimated from the bounding box

image_for_identification_path: string

Path to where the image for identification is saved

pixels : list

List of ravelled pixels belonging to the blob (wrt the full frame)
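The ravelled representation can be illustrated with a short sketch (not part of the library; `numpy` and row-major ordering are assumed, with the frame dimensions made up for the example):

```python
import numpy as np

# Hypothetical 4x5 frame; a ravelled index i maps to
# (row, col) = divmod(i, width) under row-major (C) ordering.
height, width = 4, 5
pixels = [6, 7, 11, 12]  # a small 2x2 blob

rows, cols = np.unravel_index(pixels, (height, width))
coords = list(zip(rows.tolist(), cols.tolist()))
print(coords)  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```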

_is_an_individual : bool

If True the blob is associated with a single animal

_is_a_crossing : bool

If True the blob is associated with two or more touching animals

_was_a_crossing : bool

If True the blob was generated by splitting a crossing in postprocessing

_is_a_misclassified_individual : bool

This property can only be modified by the user during validation. It identifies a blob that was mistakenly associated with a crossing by the DeepCrossingDetector

next : list

List of blob objects segmented in self.frame_number + 1 whose list of pixels intersect self.pixels

previous : list

List of blob objects segmented in self.frame_number - 1 whose list of pixels intersect self.pixels
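The overlap-based linking of consecutive blobs can be sketched as follows. This is a minimal illustration, not the library's code: `MiniBlob`, `overlaps` and `link` are hypothetical stand-ins for `Blob`, `overlaps_with` and `now_points_to`.

```python
# Minimal sketch of linking consecutive blobs through overlapping pixels,
# mirroring the next/previous attributes described above.
class MiniBlob:
    def __init__(self, pixels):
        self.pixels = pixels               # ravelled pixel indices
        self.next, self.previous = [], []

def overlaps(a, b):
    # Two blobs overlap if their pixel lists intersect
    return bool(set(a.pixels) & set(b.pixels))

def link(earlier, later):
    # Update the overlapping histories of two consecutive blobs
    earlier.next.append(later)
    later.previous.append(earlier)

a, b = MiniBlob([10, 11, 12]), MiniBlob([12, 13])
if overlaps(a, b):
    link(a, b)
print(len(a.next), len(b.previous))  # 1 1
```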

_fragment_identifier : int

Unique integer identifying the fragment (built by blob overlapping) to which self belongs

_blob_index : int

Hierarchy of the blob at the beginning of the core of the global fragment. Only used to plot accumulation steps

_used_for_training : bool

If True the image obtained from the blob has been used to train the idCNN

_accumulation_step : int

Accumulation step in which the image associated to the blob has been accumulated

_generated_while_closing_the_gap : bool

If True the blob has been generated while solving the crossings

_user_generated_identities : tuple

The identities corrected during validation

_identities_corrected_closing_gaps : list

The identities given to the blob in postprocessing

_identity_corrected_solving_jumps : int

The identity given to the blob while solving duplications

_identity : int

Identity associated to the blob

is_identified : bool

True if self.identity is not None

final_identities : list

List of the final identities of the blob

assigned_identities : list

Identities assigned to self by the algorithm (ignoring eventual corrections made by the user during validation)

has_ambiguous_identity : bool

True if, during either accumulation or residual identification, the blob has been associated with equal probability to two (or more) distinct identities

nose_coordinates : tuple

Coordinates of the nose of the blob (only for zebrafish)

head_coordinates : tuple

Coordinates of the centroid of the head of the blob (only for zebrafish)

extreme1_coordinate : tuple

extreme2_coordinates : tuple

_resolution_reduction : float

Methods

- `add_centroid(video, centroid, identity[, …])`: Adds a centroid with a given identity.
- `apply_model_area(video, number_of_animals, …)`: Classify self as a crossing or an individual blob according to its area.
- `check_for_multiple_next_or_previous([direction])`: Return True if self has multiple blobs in its past or future overlapping history.
- `delete_centroid(video, identity, centroid, …)`: Remove the centroid and the identity from the blob if they exist.
- `distance_from_countour_to(point)`: Returns the distance between the given point and the closest point on the contour of the blob.
- `draw(image[, colors_lst, selected_id, …])`: Draw the blob representation in an image.
- `get_image_for_identification(video[, …])`: Compute the image that will be used to identify the animal with the idCNN.
- `in_a_global_fragment_core(blobs_in_frame)`: A blob in a frame is in the core of a global fragment if in that frame there are as many blobs as animals to track.
- `is_a_sure_crossing()`: A blob marked as a sure crossing will be used to train the Deep Crossing Detector (the artificial neural network used to discriminate images of individuals from images of crossings).
- `is_a_sure_individual()`: A blob marked as a sure individual will be used to train the Deep Crossing Detector.
- `now_points_to(other)`: Given two consecutive blob objects, updates their respective overlapping histories.
- `overlaps_with(other)`: Given a second blob object, checks if the lists of pixels of the two blobs intersect.
- `propagate_identity(old_identity, …)`: Propagates the identity specified by new_blob_identity.
- `set_image_for_identification(video)`: Set the image that will be used to identify the animal with the idCNN.
- `squared_distance_to(other)`: Returns the squared distance from the centroid of self to the centroid of other.
- `update_centroid(video, old_centroid, …)`: Updates the coordinates of the centroid.
- `update_identity(old_id, new_id, centroid)`: Updates the identity.
- `compute_overlapping_with_previous_blob`, `removable_identity`: undocumented.
add_centroid(video, centroid, identity, apply_resolution_reduction=True)[source]

Adds a centroid with a given identity.

Parameters:

centroid : tuple

Centroid to be added, in full resolution coordinates. len(centroid) must be 2

identity : int

Identity of the centroid. Must be > 0 and <= number_of_animals
apply_model_area(video, number_of_animals, model_area, identification_image_size, number_of_blobs)[source]

Classify self as a crossing or individual blob according to its area

Parameters: video :
check_for_multiple_next_or_previous(direction=None)[source]

Return True if self has multiple blobs in its past or future overlapping history

Parameters:

direction : str

"previous" or "next". If "previous", the past overlapping history is checked to find out whether the blob splits in the past; symmetrically, if "next", the future overlapping history is checked

Returns:

bool

True if the blob splits into two or more overlapping blobs in its "past" or "future" history, depending on direction
delete_centroid(video, identity, centroid, blobs_in_frame, apply_resolution_reduction=True)[source]

Remove the centroid and the identity from the blob if they exist.

Parameters:

centroid : tuple

Centroid to be removed, in full frame coordinates (without the resolution reduction applied)
distance_from_countour_to(point)[source]

Returns the distance between the given point and the closest point on the contour of the blob.

Parameters:

point : tuple

(x, y)

Returns:

float

$$\min_{c \in \text{blob.contour}} d(c, \text{point})$$, where $$d$$ is the Euclidean distance
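A hypothetical helper implementing this formula, taking the minimum Euclidean distance from the point to any contour point (the function name and inputs are made up for illustration):

```python
import math

# Minimum Euclidean distance from `point` to any point of the contour
def distance_from_contour_to(contour, point):
    return min(math.dist(c, point) for c in contour)

contour = [(0, 0), (3, 0), (3, 4)]
print(distance_from_contour_to(contour, (0, 1)))  # 1.0
```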
draw(image, colors_lst=None, selected_id=None, is_selected=False)[source]

Draw the blob representation in an image.

Parameters:

image : numpy.array

Image where the blob should be drawn

selected_id : str

Identity of the selected blob

colors_lst : list

List of colors used to draw the blobs

final_centroids
Returns: Return a list of the final centroids.
final_centroids_full_resolution
Returns: Return a list of the final centroids in the full resolution of the frame
final_identities
Returns: Return a list of the final identities.
get_image_for_identification(video, folder_to_save_for_paper_figure='', image_size=None)[source]

Compute the image that will be used to identify the animal with the idCNN

Parameters: video :
in_a_global_fragment_core(blobs_in_frame)[source]

A blob in a frame is in the core of a global fragment if in that frame there are as many blobs as number of animals to track

Parameters:

blobs_in_frame : list

List of Blob objects representing the animals segmented in frame self.frame_number

Returns:

bool

True if the blob is in the core of a global fragment
is_a_sure_crossing()[source]

A blob marked as a sure crossing will be used to train the Deep Crossing Detector (the artificial neural network used to discriminate images of individuals from images of crossings).

Returns:

bool

The blob is a sure crossing if it overlaps with one and only one blob in both the immediate past and future frames, and it splits in both its past and future overlapping history
is_a_sure_individual()[source]

A blob marked as a sure individual will be used to train the Deep Crossing Detector (the artificial neural network used to discriminate images of individuals from images of crossings).

Returns:

bool

The blob is a sure individual if it overlaps with one and only one blob in both the immediate past and future frames, and it never splits in its past or future overlapping history
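The two criteria can be sketched together. This is a hedged illustration, not the library's implementation: `splits_in_history` stands in for the past/future splitting check described above, and the blob is faked with a `SimpleNamespace`.

```python
from types import SimpleNamespace

# One-to-one overlap with the immediate past and future frames
def has_one_to_one_overlap(blob):
    return len(blob.previous) == 1 and len(blob.next) == 1

def is_sure_individual(blob, splits_in_history):
    # One-to-one overlap and never splits in its history
    return has_one_to_one_overlap(blob) and not splits_in_history

def is_sure_crossing(blob, splits_in_history):
    # One-to-one overlap but splits somewhere in its history
    return has_one_to_one_overlap(blob) and splits_in_history

blob = SimpleNamespace(previous=[object()], next=[object()])
print(is_sure_individual(blob, splits_in_history=False))  # True
print(is_sure_crossing(blob, splits_in_history=True))     # True
```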
now_points_to(other)[source]

Given two consecutive blob objects, updates their respective overlapping histories

Parameters:

other : Blob

An instance of the class Blob
overlaps_with(other)[source]

Given a second blob object, checks if the lists of pixels of the two blobs intersect

Parameters:

other : Blob

An instance of the class Blob

Returns:

bool

True if the lists of pixels have non-empty intersection
propagate_identity(old_identity, new_blob_identity, centroid)[source]

Propagates the identity specified by new_blob_identity.

set_image_for_identification(video)[source]

Set the image that will be used to identify the animal with the idCNN

Parameters: video :
squared_distance_to(other)[source]

Returns the squared distance from the centroid of self to the centroid of other

Parameters:

other : Blob or tuple

An instance of the class Blob or a tuple (x, y)

Returns:

float

Squared distance between the centroids
update_centroid(video, old_centroid, new_centroid, identity)[source]

Updates the coordinates of the centroid

Parameters:

old_centroid : tuple

len(old_centroid) must be 2

new_centroid : tuple

len(new_centroid) must be 2
update_identity(old_id, new_id, centroid)[source]

Updates identity. If the blob has multiple identities already assigned the old_id to be modified must be specified.

Parameters:

new_id : int

New value for the identity of the blob

old_id : int

Old value of the identity of the blob. Must be specified when the blob already has multiple identities assigned
blob.full2miniframe(point, boundingBox)[source]

Maps a point in the full frame to the coordinate system defined by the image generated by considering the bounding box of the blob. Here it is used for centroids.

Parameters:

point : tuple

(x, y)

boundingBox : list

[(x, y), (x + bounding_box_width, y + bounding_box_height)]

Returns:

tuple

$$(x^\prime, y^\prime)$$
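The mapping amounts to subtracting the top-left corner of the bounding box. A hypothetical re-implementation (not the library's code, example values made up):

```python
# Map a full-frame point into bounding-box (miniframe) coordinates
def full2miniframe(point, bounding_box):
    (x0, y0), _ = bounding_box   # top-left corner of the bounding box
    x, y = point
    return (x - x0, y - y0)

bbox = [(100, 50), (180, 110)]   # [(x, y), (x + w, y + h)]
print(full2miniframe((120, 70), bbox))  # (20, 20)
```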
blob.remove_background_pixels(height, width, bounding_box_image, pixels, bounding_box_in_frame_coordinates, folder_to_save_for_paper_figure)[source]

Removes the background pixels, substituting them with a homogeneous black background.

Parameters:

height : int

Frame height

width : int

Frame width

bounding_box_image : ndarray

Image cropped from the frame by considering the bounding box associated with a blob

pixels : list

List of pixels associated with a blob

bounding_box_in_frame_coordinates : list

[(x, y), (x + bounding_box_width, y + bounding_box_height)]

identification_image_size : tuple

Shape of the identification image

folder_to_save_for_paper_figure : str

Folder where the images for identification are saved

Returns:

ndarray

Image with black background pixels
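A sketch of the idea, under stated assumptions (not the library's implementation): `pixels` are ravelled indices with respect to the full frame in row-major order, and everything in the bounding-box crop outside the blob's pixels is blacked out.

```python
import numpy as np

def remove_background(height, width, bb_image, pixels, bbox):
    (x0, y0), _ = bbox
    # Convert ravelled full-frame indices to (row, col) coordinates
    rows, cols = np.unravel_index(pixels, (height, width))
    # Mask of the blob's pixels inside the bounding-box crop
    mask = np.zeros(bb_image.shape[:2], dtype=bool)
    mask[rows - y0, cols - x0] = True
    # Keep blob pixels, black out the rest
    out = np.zeros_like(bb_image)
    out[mask] = bb_image[mask]
    return out

frame_h, frame_w = 4, 4
bb_image = np.full((2, 2), 255, dtype=np.uint8)  # crop at [(1, 1), (3, 3)]
pixels = [5, 10]                                  # frame pixels (1, 1) and (2, 2)
out = remove_background(frame_h, frame_w, bb_image, pixels, [(1, 1), (3, 3)])
print(out.tolist())  # [[255, 0], [0, 255]]
```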

## List of Blobs¶

Collection of instances of the class Blob generated by considering all the blobs segmented from the video.

## Model area¶

Allows applying a model of the area of the individuals to be tracked to all the blobs collected during the segmentation process (see segmentation)

class model_area.ModelArea(mean, median, std)[source]

Model of the area used to perform a first discrimination between blobs representing single individual and multiple touching animals (crossings)

Attributes:

median : float

Median of the area of the blobs segmented from portions of the video in which all the animals are visible (not touching)

mean : float

Mean of the area of the blobs segmented from portions of the video in which all the animals are visible (not touching)

std : float

Standard deviation of the area of the blobs segmented from portions of the video in which all the animals are visible (not touching)

std_tolerance : int

Tolerance factor

Methods

- `__call__(area[, std_tolerance])`: Call self as a function.
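A hedged sketch of the area test this model implies: an area counts as a single individual if it lies within a tolerance band around the median. The symmetric `abs(...)` form and the default tolerance value are assumptions for illustration, not taken from the source.

```python
# Area model: individual-like if within std_tolerance standard
# deviations of the median blob area (tolerance value is an assumption)
def model_area(area, median, std, std_tolerance=4):
    return abs(area - median) < std_tolerance * std

print(model_area(105, median=100, std=10))  # True
print(model_area(300, median=100, std=10))  # False
```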