Work Details :
First week (May 9 to May 15) :
Getting a good grasp of Python, NumPy and the different libraries being used in ISTARE
Focused, detailed study of background subtraction, human detection and connected component analysis
Started trial.py – my own code for working with and getting a grasp of the libraries
Using the colour-space edge and the colour-space time derivative (dt)
Saving videos as image frames (a small sketch of this step is at the end of this week's notes)
Detailed study of segmentation and the algorithm used; getting familiar with the code
Getting to know superpixels and supervoxels, and their code
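The frame-dumping step above is roughly the following (a minimal sketch, assuming OpenCV; the file names are placeholders, not the actual ones used):

```python
# Minimal sketch of the "save video as frames" step, assuming OpenCV (cv2);
# "input.avi" and the output file pattern are placeholders.
import cv2

cap = cv2.VideoCapture("input.avi")
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:                              # no frames left
        break
    cv2.imwrite("frame_%05d.png" % idx, frame)
    idx += 1
cap.release()
```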
Second Week (May 16 to May 22) :
Continued the detailed study of segmentation and the algorithm used; getting more familiar with the code
Playing with masking videos, segmenting videos
Segment and save the masked videos as frames
Saving the superpixels into files – uff!! this was hard work!!
Converted the whole sequence of frames back into a video, with a white background (see the sketch at the end of this week's notes)
Using the colour-space edge of a frame that involves movement, and then superimposing the segmented image of that frame on it. Devised two methods – neither that good!!
Discussion with David and Julian – HOG detector
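The frames-to-video step with a white background is roughly this (a sketch, assuming OpenCV and NumPy; file names, codec, frame rate and the "black = masked out" convention are all assumptions):

```python
# Minimal sketch of turning masked frames back into a video with a white
# background. File names, codec and frame rate are placeholders.
import glob
import cv2
import numpy as np

frames = sorted(glob.glob("masked_frame_*.png"))
h, w = cv2.imread(frames[0]).shape[:2]
out = cv2.VideoWriter("segmented.avi", cv2.VideoWriter_fourcc(*"XVID"), 25, (w, h))
for f in frames:
    img = cv2.imread(f)
    background = np.all(img == 0, axis=2)   # pixels the mask removed
    img[background] = 255                   # paint them white
    out.write(img)
out.release()
```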
Third Week (May 23 – May 29) :
Reading the paper – Object Detection via Boundary Structure Segmentation – Ben Taskar;
Reading about the Hough Transform – discarded for the moment
Understanding the BoSS code and creating my own masks for running it; installing MATLAB
The masks created earlier didn't work because they were being converted to RGB (from grayscale) when they were cropped. So I changed them back to mode 'L' from the terminal (a sketch of the conversion is at the end of this week's notes).
Creating my own models/masks; running them on images
Friday – discussion on Taskar’s paper
Developed chordiograms for a circle, an equilateral triangle and isosceles triangles (over the weekend)
Discussion with David and Caiming – Normalised cut
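The mode-'L' fix above was actually done from the terminal; here is a Pillow sketch of the same conversion (the file pattern is a placeholder):

```python
# Minimal sketch of forcing the cropped masks back to single-channel
# grayscale (mode 'L'), assuming Pillow; the file pattern is a placeholder.
import glob
from PIL import Image

for path in glob.glob("masks/*.png"):
    Image.open(path).convert("L").save(path)    # overwrite in mode 'L'
```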
Fourth week (May 30 – June 5) :
Developed code for using more than one model (one circle and one rectangle) on an image, and then combining the results depending on some distance constraints. (Yippee!! that was interesting!!)
Extending the code to more than one model (circle – head; rectangle – body, left arm and right arm)
Read some papers on shape matching and segmentation
Change the code for composite_BoSS.m
Develop the code for taking individual results and clubbing together the different parts
Started cst_seg.py – code that takes the time derivative of the video and, for a particular frame, takes the segmentation and the time-differentiated frame and keeps only those superpixels that have a common boundary with the time-differentiated frame (a sketch of the idea is at the end of this week's notes)
Rigorous discussion with Nilesh Mishra on compilers, interpreters, assemblers, machine language, high level language and bytecode
Modified demo_BoSS to work only on those superpixels that have a common boundary with the moving superpixels of that frame
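The idea behind cst_seg.py, as a sketch (NumPy assumed; prev, curr, labels and the threshold are placeholders, and a simple overlap test stands in for the boundary-contact check):

```python
# Sketch of the idea behind cst_seg.py: threshold the frame difference and
# keep only the superpixels hit by the moving pixels.
import numpy as np

def moving_superpixels(prev, curr, labels, thresh=20):
    diff = np.abs(curr.astype(int) - prev.astype(int))
    moving = diff > thresh                       # binary time-derivative mask
    keep = np.unique(labels[moving])             # superpixels touched by motion
    return np.isin(labels, keep)                 # mask of the kept superpixels
```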
Fifth week (June 6 – June 12):
Monday :
Further modified demo_BoSS to work only on those superpixels that have a common boundary with the moving superpixels of that frame
Ran this modified demo code on some five frames and got results – not good at all!!
Tuesday + Wednesday :
Combined the segmentation method of Pedro Felzenszwalb with the BoSS code (Taskar et al.) and the Ncut code (Timothee Cour et al.) – this (still) is really, really tough!!
Modified Pedro's segmentation code; no random colourization occurs now… the colours are mathematically formed!!
Reverted to random colourization – the previous approach did not work; thus changed the process for grouping
Wrote rgb2idx(rgbimg) to do the same
Changed Dr. Corso’s segment.m, preprocess_img and demo_BoSS to incorporate the binary mask (cst mask)
To Remember :
discretisation.m (in the BoSS code) renamed to discretisation_boss.m to avoid a conflict with the discretisation.m of the original Ncut code.
In Dr. Corso's segment.m, rgb2ind has been replaced by the rgb2idx() I wrote (a sketch of the idea follows)
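rgb2idx itself is MATLAB; a NumPy sketch of the same idea – one integer label per distinct colour, which is what rgb2ind was being used for:

```python
# Sketch of the idea behind rgb2idx (the MATLAB original is not reproduced):
# assign one integer label per distinct colour in an RGB image.
import numpy as np

def rgb2idx(rgbimg):
    h, w, _ = rgbimg.shape
    colours, idx = np.unique(rgbimg.reshape(-1, 3), axis=0, return_inverse=True)
    return idx.reshape(h, w), colours            # label image + colour table
```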
Thursday :
Modified rgb2idx(rgbimg) – better time complexity
Again changed the code for grouping non-moving pixels; it now works by moving through the image and collecting the different instances of colour
Wrote code to do everything automatically – written in hdmsi.m
Wrote fr2str.m – to convert a frame number into a string compatible with the file naming
Added code to boss.m (a modified, extended version of demo_boss.m) that finds the bounding box using the binary top detection
Wrote draw_on_image.m to draw the bounding box on the frame using the top-left and bottom-right points from above (a sketch follows)
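What draw_on_image.m does, as a sketch (NumPy assumed; the (row, col) corner convention is an assumption – fr2str.m is presumably just zero-padded number formatting):

```python
# Sketch of what draw_on_image.m does: paint the rectangle defined by the
# top-left and bottom-right corners onto a frame (H x W x 3 array).
import numpy as np

def draw_box(img, top_left, bottom_right, colour=(255, 0, 0)):
    r0, c0 = top_left
    r1, c1 = bottom_right
    img[r0, c0:c1 + 1] = colour      # top edge
    img[r1, c0:c1 + 1] = colour      # bottom edge
    img[r0:r1 + 1, c0] = colour      # left edge
    img[r0:r1 + 1, c1] = colour      # right edge
    return img
```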
Friday and Saturday :
Changed the way post-processing is done (in preprocess.m of the BoSS code) – it now works with the affinity matrix and the eigenvectors sent back from segment.m – no double segmentation!!
Changed the way the affinity matrix and eigenvectors are computed (segment.m) – earlier they were computed using the class matrix; now they are computed using the "moving parts" of the original image – independent of the segment colour and the class formation… as the original image is the same!!
Made the whole code work with the results returned from the segmentation code… worked out how to compute seg_bnds accordingly so that matching works
Sixth week (June 13 – June 19) :
Monday and Tuesday (part) :
Made a single-line change to BoSS_code/util/sample_cntr2.m – to prevent an index-out-of-bounds error; it does not affect earlier results…
Figured out how to make the BoSS code work with other segmentation results… modified the code accordingly… (this was difficult)
Tuesday (part), Wednesday and Thursday (part) :
Started out on the convolution process… wrote the code accordingly (a small sketch of the matching step is at the end of these notes)…
First tried it on a single frame of a video… then generalised it…
Wrote conv.py, conv2.py and conv3.py – for running and testing convolution results…
Got a new idea on how to check the matching criterion (earlier it was a simple sum… now it is more mathematical)… modified it in conv3 – have to see whether this is better or not…
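The matching step, roughly (a sketch in the spirit of conv2.py/conv3.py, assuming SciPy/NumPy; frame_mask and model are placeholders, and the actual scoring – sum vs. fraction, the newer criterion – is not reproduced):

```python
# Sketch of the convolution/matching step: correlate a binary model with the
# binary motion image and take the peak of the response as the candidate.
import numpy as np
from scipy.signal import fftconvolve

def best_match(frame_mask, model):
    # correlation == convolution with the kernel flipped along both axes
    response = fftconvolve(frame_mask.astype(float),
                           model[::-1, ::-1].astype(float), mode="same")
    peak = np.unravel_index(np.argmax(response), response.shape)
    return peak, response[peak]
```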
Seventh week (June 20 – June 26) :
Tuesday :
Figured out how to store the data from the convolution results, so that it is usable from Ran’s code…
Modified Ran's code for finding the true-positive percentage… made it work with the convolution results…
Saved the different result sets (in the convolution part)…
Have to make it work with the BoSS result…
box_conv_cst.txt has the results with the sum of p; box_conv.txt has those with the fraction…
Wednesday :
Running the convolution and BoSS codes on different parameters – Focus on convolution
For convolution – Sum/Fraction; Walk1/Walk2/Stand1; ds=4/ds=1
For BoSS – Num Class/Original Image Class
Working on 4 videos (that have humans!) from the 15-video subset
Observations : Over the threshold range (0.3,0.6)
Stand1 > Walk1 > Walk2 ; Sum > Frac ; Results get worse with increasing downsampling factor (Results with ds=2 are better than default ds=4)
Currently working with ds = 4, so results will improve with the original video
Tried convolution on binary background-subtracted videos – similar results to those from the cst videos…
Thursday :
Wrote a code improvement for getting the list of colours in the non-zero region of a mask – the change can be found in doublesegfr.py, doubleseg.py and doubleseg_conv.py
Wrote Python code for double removal of "outer"/non-moving segments – doubleseg_conv.py. Earlier, only the non-moving parts, i.e. those which did not have a common boundary with the cst frame image, were set to background. Now, using the fact that the actual moving parts are usually covered by other segments which are in contact with this single non-moving part, all segments directly in contact with the non-moving part are removed as well. This gives only the moving parts, but also removes some inner parts… it may give good results even with BoSS segments (a sketch of the idea follows)
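The double-removal idea as a sketch (NumPy/SciPy assumed; labels and motion are placeholders, and adjacency is approximated with a 1-pixel dilation):

```python
# Sketch of the "double removal": drop segments that do not touch the motion
# (cst) mask, then also drop segments directly in contact with those.
import numpy as np
from scipy.ndimage import binary_dilation

def double_removal(labels, motion):
    moving = set(np.unique(labels[motion > 0]))        # segments touching motion
    non_moving = set(np.unique(labels)) - moving
    contact = set()
    for lab in non_moving:
        ring = binary_dilation(labels == lab) & (labels != lab)
        contact |= set(np.unique(labels[ring]))        # neighbours of a non-moving segment
    keep = moving - non_moving - contact
    return np.isin(labels, list(keep))                 # binary "moving parts" image
```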
Friday : (AWESOME!)
Wrote code for finding the whole silhouette from a part of it – carve_sil.py (2 scripts – one for video, the other for a single frame); a sketch of the idea is at the end of today's notes
Wrote draw_on_any_video.py – Draws bounding boxes on a video using data from a text file
Wrote conv_vid_tot.py – earlier in the convolution code (conv2, conv3), the bounding box in each frame was made directly from the obtained peak(s). I changed it so that anything above (say) 50% of the maximum peak is now under consideration. From each box, the positive (moving) part under it is taken and its whole silhouette is obtained, but only if that positive part is not already included in full silhouettes discovered from previous boxes. This gives non-repetitive silhouettes (sometimes body parts are discovered as different objects because they are not found while extending the body) and multiple silhouettes (if they are not too close, i.e. one is not discovered while extending the other; otherwise they are grouped as one)
Made a one-line change to ../BoSS_code/BoSS_main/mms_discretization_v3 – line 15 – for s = 1:min(length(ii),length(fg_all)) – to remove an index-out-of-bounds error
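The carving idea as a sketch (NumPy/SciPy assumed; motion and seed are placeholder binary arrays):

```python
# Sketch of the idea behind carve_sil.py: grow a partial silhouette (the
# positive pixels under a detection box) out to the full silhouette by keeping
# the connected components of the binary motion image that contain it.
import numpy as np
from scipy.ndimage import label

def carve_silhouette(motion, seed):
    comps, _ = label(motion)            # connected components of the motion mask
    hit = np.unique(comps[seed > 0])
    hit = hit[hit != 0]                 # drop the background label
    return np.isin(comps, hit)          # full silhouette(s) touching the seed
```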
TRY :
Instead of discarding an already discovered part of a silhouette, do something with it and determine whether it is useful or not.
TRY :
1. If some part of a silhouette is discovered, try something like extending that part to get the whole silhouette. For example, check in a 5×5 region around a "true" pixel whether there are any other true pixels, and activate them (see the sketch below)
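The 5×5 "activation" above is essentially a binary dilation with a 5×5 structuring element; a one-function sketch, assuming SciPy:

```python
# Grow every true pixel of a partial silhouette into its 5x5 neighbourhood.
import numpy as np
from scipy.ndimage import binary_dilation

def extend_silhouette(partial):
    return binary_dilation(partial, structure=np.ones((5, 5)))
```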
TRY : In convolution,
instead of convolving the cst image directly, take the segments which have a common boundary with the 'moving segments' – colour the others with some single common colour – then take only those segments which do not have a common boundary with this "common" colour… convert this result to a binary image – then convolve – essentially this can be thought of as running convolution on a background segmentation result… (can try that too!)
Do this with the help of the code from /home/srijan/hdmsi.py or the saved results from seg_cst in each folder…
Not just the convolution part…even the part in BoSS where “outer” segments are
TODO:
Convolve primitive and cropped masks over frames of videos, get peaks, and define functions for scoring these peaks (how much they match || how much they differ || … different methods are possible… take a product of those or define some function)
Get BoSS working correctly… maybe use the affinity matrix of the original or segmented image and then get the eigenvectors…
OR Change grouping function of BoSS
Detect the silhouette – just the human silhouette within the bounding box is enough…
For convolution –
take primitive models as well as cropped ones… then convolve over the frames – this will give peaks depending on how well the model matches… get results from these peaks…
There are two methods for convolution –
MATLAB and Python – both have convolve functions
Python's convolution flips the kernel over… twice, once along each axis… (see the sketch after this list)
decide which one (MATLAB or Python) is better and go on…
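The flipping point, concretely (a sketch assuming SciPy): convolution flips the kernel along both axes, so it only differs from correlation when the model is not symmetric.

```python
# Convolution vs correlation: convolution flips the kernel along both axes,
# so the two differ whenever the model is not symmetric.
import numpy as np
from scipy.signal import convolve2d, correlate2d

img = np.random.rand(50, 50)
kernel = np.random.rand(7, 5)                     # deliberately non-symmetric

conv = convolve2d(img, kernel, mode="same")
corr = correlate2d(img, kernel[::-1, ::-1], mode="same")
print(np.allclose(conv, corr))                    # True: conv == corr with a flipped kernel
```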
Use carve_sil.py extensively –
In a video, convolve on each frame. In each frame, peaks/near-peaks are obtained.
From here there are two possible methods –
1.a. Check how much the bounding boxes of the peaks overlap (see the overlap sketch after this list). If the overlap is above a certain %, then they are different objects altogether; otherwise they are the same.
b. Now, for each of these remaining bounding boxes, carve it – this is the set of possible bounding boxes.
2.a. For each of these peaks/near-peaks, get bounding boxes.
b. Then 'carve' those bounding boxes (use the current bounding box to get a part of the silhouette from the cst image and then get the whole silhouette, in binary form; this gives the bounding box obtained).
c. Remove all 'almost similar' instances of boxes from the set of bounding boxes. This will give the set of all 'different' bounding boxes.
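One concrete way to score the overlap in 1.a is intersection-over-union of the two boxes (a sketch; the corner convention and any threshold applied to the result are assumptions):

```python
# Overlap (intersection-over-union) of two boxes given as (r0, c0, r1, c1).
def box_iou(a, b):
    r0, c0 = max(a[0], b[0]), max(a[1], b[1])
    r1, c1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, r1 - r0 + 1) * max(0, c1 - c0 + 1)
    area_a = (a[2] - a[0] + 1) * (a[3] - a[1] + 1)
    area_b = (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
    return inter / float(area_a + area_b - inter)
```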
Use the footfall in binary form / the footfall as a bounding box to get a part of the silhouette, then extend that part to get the whole silhouette
Extend convolution to get more than one box in a frame –