Work Details :
First week (May 9 to May 15) :
Getting a good grasp of Python, NumPy, and the different libraries being used in ISTARE
– Focused, detailed study of background subtraction, human detection and connected-component analysis
Started trial.py – my own code for experimenting and getting a grasp of the libraries
Using colour space edge and colour space dt
Saving video as images
Detailed study of segmentation and the algorithm used; getting familiar with the code
Knowing superpixels and supervoxels; and their code
Second Week (May 16 to May 22) :
Detailed study of segmentation and the algorithm used; getting familiar with the code
Playing with masking videos, segmenting videos
Segment and save the masked videos as frames
Saving the superpixels into files – uff!! this was hard work!!
Convert the whole string of frames into video, with white background
Using the colour space edge of some frame that involves movement, and then superimposing the segmented image of that frame on it. Devised two methods – neither that good!!
Discussion with David and Julian – HOG detector
Third Week (May 23 – May 29) :
Reading the paper – Object Detection via Boundary Structure Segmentation – Ben Taskar;
Reading Hough Transform – discarded atm
Understanding the code of BoSS and creating own masks for running; installing MATLAB
Earlier masks created didn’t work because they were being converted to RGB (from grayscale) when they were cropped. So, converted them back to format ‘L’ from the terminal.
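The fix above (forcing a cropped mask back to single-channel ‘L’ mode) is a couple of lines with PIL; a minimal sketch, assuming the masks are loaded as PIL images:

```python
from PIL import Image

def force_grayscale(img):
    # cropping tools sometimes silently promote a grayscale mask to RGB;
    # convert back to single-channel 'L' mode before feeding it to BoSS
    return img.convert('L') if img.mode != 'L' else img
```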
Creating own models/masks; running them on image
Friday – discussion on Taskar’s paper
Developed chordiograms for a circle, an equilateral triangle and an isosceles triangle (over the weekend)
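A minimal sketch of the kind of chordiogram computation meant above, binning only chord length and orientation over sampled boundary points (the full descriptor in the BoSS paper also uses the boundary normals at the chord endpoints, omitted here):

```python
import numpy as np

def chordiogram(points, nbins=8):
    # histogram of pairwise chord lengths and orientations over boundary points
    d = points[:, None, :] - points[None, :, :]
    lengths = np.hypot(d[..., 0], d[..., 1])
    angles = np.arctan2(d[..., 1], d[..., 0])
    iu = np.triu_indices(len(points), k=1)  # each unordered pair once
    hist, _, _ = np.histogram2d(lengths[iu], angles[iu], bins=nbins)
    return hist

# boundary of a unit circle, sampled at 32 points (illustrative)
t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
```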
Discussion with David and Caiming – Normalised cut
Fourth week (May 30 – June 5) :
Developed code for using more than one model (one circle and one rectangle) on an image, and then combining the results depending on some distance constraints. (Yippie!! that was interesting!!)
Extending the code to more than one model (circle – head; rectangles – body, left arm and right arm)
Read some papers on shape matching and segmentation
Change the code for composite_BoSS.m
Develop the code for taking individual results and clubbing together the different parts
Started cst_seg.py – code that takes the time derivative of a video and, for a particular frame, uses the segmentation and the time-differentiated frame to keep only those superpixels that share a boundary with the time-differentiated frame
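The selection step in cst_seg.py can be sketched with NumPy/SciPy: a label image stands in for the superpixels, and a label is kept if its region touches the thresholded frame difference (function name, threshold and the one-pixel dilation are assumptions):

```python
import numpy as np
from scipy import ndimage

def moving_superpixels(labels, frame_a, frame_b, thresh=20):
    # threshold the absolute frame difference, grow it by one pixel, and keep
    # every superpixel label whose region intersects the grown motion mask
    motion = np.abs(frame_a.astype(int) - frame_b.astype(int)) > thresh
    touch = ndimage.binary_dilation(motion)
    return np.unique(labels[touch])
```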
Rigorous discussion with Nilesh Mishra on compilers, interpreters, assemblers, machine language, high level language and bytecode
Modified demo_BoSS for working only on those superpixels that have a common boundary with moving superpixels of that frame
Fifth week (June 6 – June 12) :
Further modified demo_BoSS for working only on those superpixels that have a common boundary with moving superpixels of that frame
Ran this modified demo code on some five frames and got results – not good at all!!
Tuesday + Wednesday :
Combined the segmentation method of Pedro Felzenszwalb with the BoSS code (Taskar et al.) and the Ncut code (Timothee Cour et al.) —This (still) is really really tough!!
Modified the segmentation code by Pedro; no random colourization occurs now…mathematically formed!!
Reverted to random colourization – the previous one did not work; thus changed the process for grouping
Wrote rgb2idx(rgbimg) to do the same
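A rough NumPy equivalent of what rgb2idx(rgbimg) does – mapping each distinct colour in a segment image to an integer label; the MATLAB original is not reproduced here, so the return convention is assumed:

```python
import numpy as np

def rgb2idx(rgbimg):
    # map each distinct RGB colour to an integer index; returns the index
    # image plus the colour table, one row per colour
    h, w, _ = rgbimg.shape
    colours, idx = np.unique(rgbimg.reshape(-1, 3), axis=0, return_inverse=True)
    return idx.reshape(h, w), colours
```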
Changed Dr. Corso’s segment.m, preprocess_img and demo_BoSS to incorporate the binary mask (cst mask)
To Remember :
discretisation.m (in boss code) renamed to discretisation_boss.m to avoid conflict with the discretisation.m of Ncut(original).
In Dr. Corso’s segment.m, rgb2ind has been replaced by rgb2idx() written by me
Modified rgb2idx(rgbimg) – better time complexity
Again changed the code for grouping non-moving pixels; now done by moving through the image and collecting the different instances of colour
Wrote code to do everything automatically – written in hdmsi.m
Wrote fr2str.m – to convert frame number to string compatible with the naming;
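fr2str.m itself is MATLAB; the idea in Python is a one-liner (the pad width of 4 is an assumption about the frame naming):

```python
def fr2str(n, width=4):
    # zero-pad a frame number so files sort correctly, e.g. 7 -> "0007"
    return str(n).zfill(width)
```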
Added code to boss.m (a modified extended version of demo_boss.m) that would find bounding box using the binary top detection
Wrote draw_on_image.m to draw the bounding box on the frame using top left and the bottom right points from above
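draw_on_image.m is MATLAB; the same idea sketched in NumPy, writing the rectangle outline directly into a grayscale frame (argument conventions – (row, col) corners, inclusive – are assumptions):

```python
import numpy as np

def draw_box(img, tl, br, value=255):
    # draw an axis-aligned rectangle outline from top-left to bottom-right
    r0, c0 = tl
    r1, c1 = br
    img[r0, c0:c1 + 1] = value  # top edge
    img[r1, c0:c1 + 1] = value  # bottom edge
    img[r0:r1 + 1, c0] = value  # left edge
    img[r0:r1 + 1, c1] = value  # right edge
    return img
```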
Friday and Saturday :
Changed the way post-processing is done (in preprocess.m (boss code)) – now works with the affinity matrix and the eigenvectors sent back from segment.m – No double segmentation!!
Changed the way the affinity matrix and eigenvectors are computed (segment.m) – earlier computed using the class matrix, now computed using the “moving parts” of the original image — independent of the segment colour and the class formation…as the original image is the same!!
Made the whole code work with the results returned from the segmentation code…made a way to compute seg_bnds accordingly so that matching works
Sixth week (June 13 – June 19) :
Monday and Tuesday(part) :
Made a single-line change to BoSS_code/util/sample_cntr2.m — to prevent an indexOutOfBound error; does not affect earlier results…
Figured out how to make BoSS code work with other segmentation result…Modified the code accordingly…(This was difficult)
Tuesday(part), Wednesday and Thursday(part) :
Started out on the convolution process…wrote the code accordingly…
First tried on single frame of a video….then generalised it…
Wrote conv, conv2, conv3.py – for running and testing convolution results…
Got a new idea on how to check the matching criteria (earlier it was a simple sum…now it is more mathematical)…modified in conv3 – have to see whether this is better or not…
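The conv/conv2/conv3.py scripts are not shown; the basic operation they test – sliding a binary model over a frame and scoring overlap, the “simple sum” criterion – can be sketched as below. correlate2d is used here so the model is not flipped; for symmetric models, convolution gives the same map:

```python
import numpy as np
from scipy.signal import correlate2d

def match_map(frame, model):
    # slide the binary model over the binary frame; each entry counts the
    # overlapping "on" pixels, so the peak marks the best placement
    return correlate2d(frame.astype(float), model.astype(float), mode='same')
```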
Seventh week (June 20 – June 26) :
Figured out how to store the data from the convolution results, so that it is usable from Ran’s code…
Modified Ran’s code for finding the true-positive percentage…made it work with the results from the convolution…
Saved the different result sets ( in convolution part)…
Have to make it work with the BoSS result…
box_conv_cst.txt has results with sum of p; box_conv.txt has that of fraction…
Running the convolution and BoSS codes on different parameters – Focus on convolution
For Convolution – Sum/Fraction; Walk1/Walk2/Stand1; ds=4/ds=1
For BoSS – Num Class / Original Image Class
Working on 4 videos (that have humans!), from the 15 video subset
Observations : Over the threshold range (0.3,0.6)
Stand1 > Walk1 > Walk2 ; Sum > Frac ; Results get worse with increasing downsampling factor (Results with ds=2 are better than default ds=4)
Currently working with ds = 4, so results will improve with the original video
Tried convolution on binary background subtracted videos – similar results to that of cst videos…
Wrote a code improvement for getting the list of colours in non-zero region of a mask – change can be found in doublesegfr.py, doubleseg.py, doubleseg_conv.py
Wrote python code for double removal of “outer/non-moving” segments – doubleseg_conv.py – Earlier, only the non-moving parts, i.e. those which did not have a common boundary with the cst frame image, were set to background. Now, using the fact that the actual moving parts are usually covered by other segments which are in contact with this single non-moving part, remove all those which are directly in contact with the non-moving part. This gives only the moving parts, but also removes some inner parts…May give good results even with BoSS segments
Friday : (AWESOME!)
Wrote code for finding whole silhouette from a part of it – carve_sil.py (2 codes – one for video, other for frame)
Wrote draw_on_any_video.py – Draws bounding boxes on a video using data from a text file
Wrote conv_vid_tot.py – Earlier in convolution (conv2, conv3), the bounding box in each frame was made from the obtained peak(s) directly. I changed it: now anything above (say) 50% of the maximum peak is under consideration. From each box, the positive (moving) part under it is taken and its silhouette obtained, but only if the positive part is not already included in full silhouettes discovered from previous boxes. This gives non-repetitive and multiple silhouettes (sometimes body parts are discovered as different objects because they are not found while extending the body; silhouettes that are not too close stay separate, otherwise they are grouped as one)
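The “above 50% of the maximum peak” step can be sketched as follows, grouping above-threshold responses into connected regions and returning one candidate box per region (the grouping by connectivity is an assumption about the implementation):

```python
import numpy as np
from scipy import ndimage

def peak_boxes(score, frac=0.5):
    # keep every response scoring at least frac * max (not just the argmax),
    # label connected above-threshold regions, return one box (slices) each
    mask = score >= frac * score.max()
    labelled, _ = ndimage.label(mask)
    return ndimage.find_objects(labelled)
```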
Made a one line change to ../BoSS_code/BoSS_main/mms_discretization_v3 – line 15 – for s = 1:min(length(ii),length(fg_all)) – to remove indexOutOfBound Error
Instead of discarding already discovered part of silhouette, do something and determine whether it is useful or not.
1. If some part of a silhouette is discovered, try something like extending that part to get the whole silhouette. Do something like checking in a 5×5 region around a “true” pixel, whether there are any other true pixels, and activate them
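Step 1 above – activating true pixels found in a 5×5 window around the current silhouette – is essentially repeated dilation constrained to the cst image; a minimal sketch (the fixed iteration count is an assumption; one could also loop to convergence):

```python
import numpy as np
from scipy import ndimage

def extend_part(part, cst, iters=10):
    # grow the known part within the cst (moving-pixel) image: activate any
    # true pixel of cst found in a 5x5 window around the current silhouette
    grown = part.copy()
    for _ in range(iters):
        grown = ndimage.binary_dilation(grown, np.ones((5, 5), bool)) & cst
    return grown
```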
TRY : In convolution,
instead of convolving the cst image directly, take the segments which have a common boundary with the ‘moving segments’ – colour the others some single colour – then take only those segments which do not have a common boundary with this “common” colour…convert this result to a binary image – then convolve – essentially this can be thought of as running convolution on a background-segmentation result…(can try that too!)
Do with help of code from /home/srijan/hdmsi.py or the saved results from seg_cst in each folder…
Not just the convolution part…even the part in BoSS where “outer” segments are
Convolve primitive and cropped masks on frames of videos, get peaks, define functions for getting these peaks (how much they match || how much they differ || … different methods are possible…take product of those or get some function)
Get BoSS working correctly…maybe use the affinity matrix of the original or segmented image and then get the eigenvectors…
OR Change grouping function of BoSS
Detect the silhouette – just the human silhouette within the bounding box is enough…
For convolution –
take primitive models as well as cropped ones…then convolve on frames — will get peaks depending on how much it matches…get results from these peaks…
There are two methods for convolution –
Matlab and python – both have convolve functions
python convolves by flipping the kernel over…along both axes…
decide which one (matlab or python) is better and go on…
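The flipping question above is just convolution vs. correlation: true convolution flips the kernel along both axes, so for an asymmetric model the peak lands in a different place. A quick check with SciPy (kernel values are illustrative):

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

k = np.array([[1., 2., 0.], [0., 3., 0.], [0., 0., 4.]])  # asymmetric
x = np.zeros((6, 6)); x[2, 3] = 1.0                        # impulse frame
# convolving with k equals correlating with k flipped along both axes
conv = convolve2d(x, k, mode='full')
corr = correlate2d(x, k[::-1, ::-1], mode='full')
```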
Use carve_sil.py extensively –
In a video, convolve on each frame. In each frame, peaks/near-peaks are obtained.
From here there are two possible methods –
1.a. Check how much the bounding boxes of the peaks overlap. If the overlap is above a certain %, they are the same object; otherwise they are different objects altogether.
b. Now carve each of these remaining bounding boxes – this is the set of possible bounding boxes.
2.a.For each of these peaks/near-peaks, get bounding boxes.
b.Then ‘carve’ those bounding boxes (Use current bounding box to get a part of silhouette from the cst image and then get whole silhouette, in binary form. This is the bounding box obtained).
c.Remove all ‘almost’ similar instances of boxes from the set of bounding boxes. This will give the set of all ‘different’ bounding boxes.
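The overlap test in steps 1.a and 2.c can be made concrete with intersection-over-union; a sketch, assuming boxes are given as (row, col) corner tuples with corners inclusive:

```python
def iou(a, b):
    # a, b: boxes as (r0, c0, r1, c1), corners inclusive
    inter_h = min(a[2], b[2]) - max(a[0], b[0]) + 1
    inter_w = min(a[3], b[3]) - max(a[1], b[1]) + 1
    inter = max(0, inter_h) * max(0, inter_w)
    area_a = (a[2] - a[0] + 1) * (a[3] - a[1] + 1)
    area_b = (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
    return inter / (area_a + area_b - inter)
```

Boxes whose IoU exceeds some threshold would then be treated as ‘almost’ similar instances and merged or dropped.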
Use the footfall in binary form / footfall as a bounding box to get a part of the silhouette, then extend that part to get the whole silhouette
Extend convolution to get more than one box in a frame –