UPDATE: code for OpenCV3 + Android Studio is on GitHub.
This tutorial is for the older samples; if you are starting with the new ones (2.4.6), please look at this updated tutorial. The method and principles of matching are the same for both.
I finally found some time to write the promised tutorial on eye detection and template matching on Android. Since OpenCV for Android keeps getting better, some code snippets may be outdated and not the best possible way to solve the problem. If you find improvements, please comment or contact me; I will edit this post and share it with others.
We take the standard OpenCV face detection sample and extend it a little.
The Android OpenCV SDK can be found here.
If you aren't familiar with Eclipse and OpenCV yet, please read the basic setup of opencv4android first.
Import the face detection sample into Eclipse and clean/build the project to be sure it is correctly imported and working.
As you can see in the video, there are some differences in the GUI compared to the pure sample: a slider to easily change the matching method and a button to recreate the eye template.
So first we add those elements to the GUI.
- Open FdActivity.java and replace the content of the LoaderCallbackInterface method with the snippet below. The code is pretty straightforward – instead of setting an FdView instance as the content view, we programmatically create a new layout, add a button and a vertical seekbar (this class is included in the downloadable project at the end of the page) to the layout, and pass the whole layout as the content view.
- For testing purposes, edit SampleCvViewBase.java line 35 as shown below. This will change the video source to the front camera.
- Now we are ready for the main part. Open FdView.java.
// Create and set View
mView = new FdView(mAppContext);
mView.setDetectorType(mDetectorType);
mView.setMinFaceSize(0.2f);

VerticalSeekBar VerticalseekBar = new VerticalSeekBar(getApplicationContext());
VerticalseekBar.setMax(5);
VerticalseekBar.setPadding(20, 20, 20, 20);
RelativeLayout.LayoutParams vsek = new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.WRAP_CONTENT, 400);
vsek.addRule(RelativeLayout.ALIGN_PARENT_RIGHT);
// Don't forget to set the id, or alignment will not work
VerticalseekBar.setId(1);
VerticalseekBar.setOnSeekBarChangeListener(new OnSeekBarChangeListener() {
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        method = progress;
        switch (method) {
        case 0: matching_method.setText("TM_SQDIFF"); break;
        case 1: matching_method.setText("TM_SQDIFF_NORMED"); break;
        case 2: matching_method.setText("TM_CCOEFF"); break;
        case 3: matching_method.setText("TM_CCOEFF_NORMED"); break;
        case 4: matching_method.setText("TM_CCORR"); break;
        case 5: matching_method.setText("TM_CCORR_NORMED"); break;
        }
    }

    public void onStartTrackingTouch(SeekBar seekBar) { }

    public void onStopTrackingTouch(SeekBar seekBar) { }
});

matching_method = new TextView(getApplicationContext());
matching_method.setText("TM_SQDIFF");
matching_method.setTextColor(Color.YELLOW);
RelativeLayout.LayoutParams matching_method_param = new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.WRAP_CONTENT,
        RelativeLayout.LayoutParams.WRAP_CONTENT);
matching_method_param.addRule(RelativeLayout.ALIGN_PARENT_RIGHT);
matching_method_param.addRule(RelativeLayout.BELOW, VerticalseekBar.getId());

Button btn = new Button(getApplicationContext());
btn.setText("Create template");
RelativeLayout.LayoutParams btnp = new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.WRAP_CONTENT,
        RelativeLayout.LayoutParams.WRAP_CONTENT);
btnp.addRule(RelativeLayout.ALIGN_PARENT_LEFT);
btn.setId(2);
// Listen for click
btn.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        mView.resetLearFramesCount();
    }
});

RelativeLayout frameLayout = new RelativeLayout(getApplicationContext());
frameLayout.addView(mView, 0);
frameLayout.addView(btn, btnp);
frameLayout.addView(VerticalseekBar, vsek);
frameLayout.addView(matching_method, matching_method_param);
setContentView(frameLayout);
mCamera = new VideoCapture(Highgui.CV_CAP_ANDROID + 1); // +1 selects the front camera
Add some Mat variables for zooming, templates and computations, plus cascade classifiers for the right and left eye:
class FdView extends SampleCvViewBase {
    private static final String TAG = "Sample::FdView";
    private Mat mRgba;
    private Mat mGray;
    // Mats for zoom
    private Mat mZoomCorner;
    private Mat mZoomWindow;
    private Mat mZoomWindow2;
    // Helper Mat
    private Mat mResult;
    // Mats for templates
    private Mat teplateR;
    private Mat teplateL;
    private File mCascadeFile;
    private CascadeClassifier mJavaDetector;
    // Classifiers for right and left eye
    private CascadeClassifier mCascadeER;
    private CascadeClassifier mCascadeEL;
    private DetectionBasedTracker mNativeDetector;

    private static final Scalar FACE_RECT_COLOR = new Scalar(0, 255, 0, 255);
    public static final int JAVA_DETECTOR = 0;
    public static final int NATIVE_DETECTOR = 1;
    // Matching methods
    private static final int TM_SQDIFF = 0;
    private static final int TM_SQDIFF_NORMED = 1;
    private static final int TM_CCOEFF = 2;
    private static final int TM_CCOEFF_NORMED = 3;
    private static final int TM_CCORR = 4;
    private static final int TM_CCORR_NORMED = 5;

    private int mDetectorType = JAVA_DETECTOR;
    private float mRelativeFaceSize = 0;
    private int mAbsoluteFaceSize = 0;
    // counter of learning frames
    private int learn_frames = 0;
    // match value
    private double match_value;
    // rectangle used to extract the eye region - ROI
    private Rect eyearea = new Rect();
Now we need to load the cascade classifier files for the left and right eye – haarcascade_lefteye_2splits.xml, distributed with the OpenCV package (data folder). I used the same classifier for both eyes because, for the right eye, haarcascade_lefteye_2splits.xml gives me better results than haarcascade_righteye_2splits.xml. But you can try it with both – simply change the filename.
Don't forget to copy haarcascade_lefteye_2splits.xml to the /raw directory in your Android project (if it is not present, simply create it).
public FdView(Context context) {
    super(context);
    try {
        InputStream is = context.getResources().openRawResource(R.raw.lbpcascade_frontalface);
        File cascadeDir = context.getDir("cascade", Context.MODE_PRIVATE);
        mCascadeFile = new File(cascadeDir, "lbpcascade_frontalface.xml");
        FileOutputStream os = new FileOutputStream(mCascadeFile);
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = is.read(buffer)) != -1) {
            os.write(buffer, 0, bytesRead);
        }
        is.close();
        os.close();

        // --- load right eye classifier (the left-eye cascade file, see note above) ---
        InputStream iser = context.getResources().openRawResource(R.raw.haarcascade_lefteye_2splits);
        File cascadeDirER = context.getDir("cascadeER", Context.MODE_PRIVATE);
        File cascadeFileER = new File(cascadeDirER, "haarcascade_eye_right.xml");
        FileOutputStream oser = new FileOutputStream(cascadeFileER);
        byte[] bufferER = new byte[4096];
        int bytesReadER;
        while ((bytesReadER = iser.read(bufferER)) != -1) {
            oser.write(bufferER, 0, bytesReadER);
        }
        iser.close();
        oser.close();

        // --- load left eye classifier ---
        InputStream isel = context.getResources().openRawResource(R.raw.haarcascade_lefteye_2splits);
        File cascadeDirEL = context.getDir("cascadeEL", Context.MODE_PRIVATE);
        File cascadeFileEL = new File(cascadeDirEL, "haarcascade_eye_left.xml");
        FileOutputStream osel = new FileOutputStream(cascadeFileEL);
        byte[] bufferEL = new byte[4096];
        int bytesReadEL;
        while ((bytesReadEL = isel.read(bufferEL)) != -1) {
            osel.write(bufferEL, 0, bytesReadEL);
        }
        isel.close();
        osel.close();

        mJavaDetector = new CascadeClassifier(mCascadeFile.getAbsolutePath());
        mCascadeER = new CascadeClassifier(cascadeFileER.getAbsolutePath());
        mCascadeEL = new CascadeClassifier(cascadeFileEL.getAbsolutePath());
        if (mJavaDetector.empty() || mCascadeER.empty() || mCascadeEL.empty()) {
            Log.e(TAG, "Failed to load cascade classifier");
            mJavaDetector = null;
            mCascadeER = null;
            mCascadeEL = null;
        } else
            Log.i(TAG, "Loaded cascade classifier from " + mCascadeFile.getAbsolutePath());

        mNativeDetector = new DetectionBasedTracker(mCascadeFile.getAbsolutePath(), 0);

        cascadeDir.delete();
        cascadeFileER.delete();
        cascadeDirER.delete();
        cascadeFileEL.delete();
        cascadeDirEL.delete();
    } catch (IOException e) {
        e.printStackTrace();
        Log.e(TAG, "Failed to load cascade. Exception thrown: " + e);
    }
}
Now edit the image processing:
@Override
protected Bitmap processFrame(VideoCapture capture) {
    ...
    MatOfRect faces = new MatOfRect();

    if (mDetectorType == JAVA_DETECTOR) {
        if (mJavaDetector != null)
            mJavaDetector.detectMultiScale(mGray, faces, 1.1, 2, 2, // TODO: objdetect.CV_HAAR_SCALE_IMAGE
                    new Size(mAbsoluteFaceSize, mAbsoluteFaceSize), new Size());

        // Prepare zoom mats
        if (mZoomCorner == null || mZoomWindow == null)
            CreateAuxiliaryMats();

        Rect[] facesArray = faces.toArray();
        // Iterate through all detected faces
        for (int i = 0; i < facesArray.length; i++) {
            Rect r = facesArray[i];
            // Draw a rectangle around the face
            Core.rectangle(mGray, r.tl(), r.br(), new Scalar(0, 255, 0, 255), 3);
            Core.rectangle(mRgba, r.tl(), r.br(), new Scalar(0, 255, 0, 255), 3);
Now we detect the face – right, nothing new, it's the face detection sample :) What about the eyes? Once a face is found, we reduce our ROI (region of interest – where we will look for eyes) to the face rectangle only. From face anatomy we can also exclude the bottom part of the face with the mouth and some of the top part with forehead and hair; this can be computed relative to the face size. See the picture below – original image -> detected face -> eye area -> eye area split into right and left halves. This saves computing power.
// compute the eye area
eyearea = new Rect(r.x + r.width / 8, (int) (r.y + (r.height / 4.5)),
        r.width - 2 * r.width / 8, (int) (r.height / 3.0));
// split it into right and left halves
Rect eyearea_right = new Rect(r.x + r.width / 16, (int) (r.y + (r.height / 4.5)),
        (r.width - 2 * r.width / 16) / 2, (int) (r.height / 3.0));
Rect eyearea_left = new Rect(r.x + r.width / 16 + (r.width - 2 * r.width / 16) / 2,
        (int) (r.y + (r.height / 4.5)),
        (r.width - 2 * r.width / 16) / 2, (int) (r.height / 3.0));
// draw the areas - mGray is the working grayscale mat; to see them in the RGB preview, change mGray to mRgba
Core.rectangle(mGray, eyearea_left.tl(), eyearea_left.br(), new Scalar(255, 0, 0, 255), 2);
Core.rectangle(mGray, eyearea_right.tl(), eyearea_right.br(), new Scalar(255, 0, 0, 255), 2);
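The fractions above can be sanity-checked without OpenCV. Here is a small plain-Java sketch using the same integer arithmetic; the Region class and the 160×160 face at (100, 100) are hypothetical stand-ins for OpenCV's Rect and a real detection:

```java
// Plain-Java illustration of the eye-area fractions used above (no OpenCV).
public class EyeAreaSketch {
    public static class Region {
        public final int x, y, width, height;
        public Region(int x, int y, int width, int height) {
            this.x = x; this.y = y; this.width = width; this.height = height;
        }
    }

    // Same fractions as the snippet: skip ~1/16 of the face width on each side,
    // start ~1/4.5 of the face height down, keep 1/3 of the height.
    public static Region rightEyeArea(Region face) {
        return new Region(face.x + face.width / 16,
                (int) (face.y + face.height / 4.5),
                (face.width - 2 * face.width / 16) / 2,
                (int) (face.height / 3.0));
    }

    // The left eye area starts where the right one ends, same size.
    public static Region leftEyeArea(Region face) {
        Region right = rightEyeArea(face);
        return new Region(right.x + right.width, right.y, right.width, right.height);
    }

    public static void main(String[] args) {
        Region face = new Region(100, 100, 160, 160); // hypothetical detected face
        Region right = rightEyeArea(face);
        // each half is (160 - 20) / 2 = 70 px wide and (int)(160 / 3.0) = 53 px tall
        System.out.println(right.width + "x" + right.height); // prints 70x53
    }
}
```

This mirrors the Rect constructions above term by term, so changing a fraction in the real code can first be checked here.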
The first 5 frames are used for learning – the get_template function takes a classifier, the area to search in, and the desired size of the new template.
if (learn_frames < 5) {
    teplateR = get_template(mCascadeER, eyearea_right, 24);
    teplateL = get_template(mCascadeEL, eyearea_left, 24);
    learn_frames++;
} else {
    // Learning finished, use the new templates for template matching
    match_value = match_eye(eyearea_right, teplateR, FdActivity.method); // or hardcode the method you need, e.g. TM_SQDIFF_NORMED
    match_value = match_eye(eyearea_left, teplateL, FdActivity.method);
}
// cut out the eye areas and put them into the zoom windows
Imgproc.resize(mRgba.submat(eyearea_left), mZoomWindow2, mZoomWindow2.size());
Imgproc.resize(mRgba.submat(eyearea_right), mZoomWindow, mZoomWindow.size());
}
...
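The learn-then-match flow above, together with the "Create template" button from the GUI section, boils down to a small gate around a frame counter. A hypothetical plain-Java sketch of just that control flow (TemplateGate is an illustrative name, not a class from the tutorial):

```java
// Minimal sketch of the "first N frames learn, then match" gate used above.
public class TemplateGate {
    private final int learnFrames;
    private int seen = 0;

    public TemplateGate(int learnFrames) { this.learnFrames = learnFrames; }

    // true while templates should still be (re)created from incoming frames;
    // afterwards the caller switches to template matching
    public boolean learning() {
        if (seen < learnFrames) {
            seen++;
            return true;
        }
        return false;
    }

    // analogous to what the button triggers via resetLearFramesCount()
    public void reset() { seen = 0; }

    public static void main(String[] args) {
        TemplateGate gate = new TemplateGate(5);
        int learned = 0;
        for (int frame = 0; frame < 8; frame++) {
            if (gate.learning()) learned++; // first 5 frames learn, the rest match
        }
        System.out.println(learned); // prints 5
    }
}
```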
Zoom method:
private void CreateAuxiliaryMats() {
    if (mGray.empty())
        return;

    int rows = mGray.rows();
    int cols = mGray.cols();

    if (mZoomWindow == null) {
        mZoomWindow = mRgba.submat(rows / 2 + rows / 10, rows, cols / 2 + cols / 10, cols);
        mZoomWindow2 = mRgba.submat(0, rows / 2 - rows / 10, cols / 2 + cols / 10, cols);
    }
}
Matching the template:
- area – region of interest
- mTemplate – template of the eye, created by get_template
- type – matching method
- returns the matching score of the template against the given area
private double match_eye(Rect area, Mat mTemplate, int type) {
    Point matchLoc;
    Mat mROI = mGray.submat(area);
    int result_cols = mROI.cols() - mTemplate.cols() + 1;
    int result_rows = mROI.rows() - mTemplate.rows() + 1;
    // Check for a bad template size
    if (mTemplate.cols() == 0 || mTemplate.rows() == 0) {
        return 0.0;
    }
    mResult = new Mat(result_rows, result_cols, CvType.CV_32FC1);

    switch (type) {
    case TM_SQDIFF:
        Imgproc.matchTemplate(mROI, mTemplate, mResult, Imgproc.TM_SQDIFF);
        break;
    case TM_SQDIFF_NORMED:
        Imgproc.matchTemplate(mROI, mTemplate, mResult, Imgproc.TM_SQDIFF_NORMED);
        break;
    case TM_CCOEFF:
        Imgproc.matchTemplate(mROI, mTemplate, mResult, Imgproc.TM_CCOEFF);
        break;
    case TM_CCOEFF_NORMED:
        Imgproc.matchTemplate(mROI, mTemplate, mResult, Imgproc.TM_CCOEFF_NORMED);
        break;
    case TM_CCORR:
        Imgproc.matchTemplate(mROI, mTemplate, mResult, Imgproc.TM_CCORR);
        break;
    case TM_CCORR_NORMED:
        Imgproc.matchTemplate(mROI, mTemplate, mResult, Imgproc.TM_CCORR_NORMED);
        break;
    }

    Core.MinMaxLocResult mmres = Core.minMaxLoc(mResult);
    // the matching methods differ - the best match is the min value for the
    // SQDIFF variants and the max value for the others
    if (type == TM_SQDIFF || type == TM_SQDIFF_NORMED) {
        matchLoc = mmres.minLoc;
    } else {
        matchLoc = mmres.maxLoc;
    }

    Point matchLoc_tx = new Point(matchLoc.x + area.x, matchLoc.y + area.y);
    Point matchLoc_ty = new Point(matchLoc.x + mTemplate.cols() + area.x,
            matchLoc.y + mTemplate.rows() + area.y);
    Core.rectangle(mRgba, matchLoc_tx, matchLoc_ty, new Scalar(255, 255, 0, 255));

    if (type == TM_SQDIFF || type == TM_SQDIFF_NORMED) {
        return mmres.maxVal;
    } else {
        return mmres.minVal;
    }
}
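One way to see why the code above takes minLoc for the SQDIFF variants and maxLoc for the others: the sum of squared differences is zero at a perfect match (smaller is better), while plain cross-correlation grows with agreement (larger is better). A plain-Java 1-D sketch, independent of OpenCV, with hypothetical toy data:

```java
// 1-D illustration of why TM_SQDIFF takes the minimum and TM_CCORR the maximum.
public class MatchScoreSketch {
    // sum of squared differences between tmpl and signal[offset..]
    public static double sqdiff(int[] signal, int[] tmpl, int offset) {
        double s = 0;
        for (int i = 0; i < tmpl.length; i++) {
            double d = signal[offset + i] - tmpl[i];
            s += d * d;
        }
        return s;
    }

    // plain cross-correlation (sum of products) at the given offset
    public static double ccorr(int[] signal, int[] tmpl, int offset) {
        double s = 0;
        for (int i = 0; i < tmpl.length; i++) s += (double) signal[offset + i] * tmpl[i];
        return s;
    }

    public static void main(String[] args) {
        int[] signal = {0, 1, 2, 3, 0, 0}; // the template {1, 2, 3} sits at offset 1
        int[] tmpl = {1, 2, 3};
        // SQDIFF: a perfect match scores 0, the minimum over all offsets
        System.out.println(sqdiff(signal, tmpl, 1)); // prints 0.0
        // CCORR: the true offset scores higher than a wrong one
        System.out.println(ccorr(signal, tmpl, 1) > ccorr(signal, tmpl, 0)); // prints true
    }
}
```

Note that unnormalized CCORR can still prefer uniformly bright regions over the true match, which is one reason the _NORMED variants in the slider tend to behave more predictably.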
In the following picture you can see all the ROI areas and the matching in progress – the yellow rectangles.
get_template finds an eye in the desired ROI with the Haar classifier; if an eye is found, it reduces the ROI to the eye only and searches for the darkest point – the pupil. It then creates a rectangle of the desired size, centered on the pupil – our new eye template.
private Mat get_template(CascadeClassifier clasificator, Rect area, int size) {
    Mat template = new Mat();
    Mat mROI = mGray.submat(area);
    MatOfRect eyes = new MatOfRect();
    Point iris = new Point();
    Rect eye_template = new Rect();

    clasificator.detectMultiScale(mROI, eyes, 1.15, 2,
            Objdetect.CASCADE_FIND_BIGGEST_OBJECT | Objdetect.CASCADE_SCALE_IMAGE,
            new Size(30, 30), new Size());

    Rect[] eyesArray = eyes.toArray();
    for (int i = 0; i < eyesArray.length; i++) {
        Rect e = eyesArray[i];
        e.x = area.x + e.x;
        e.y = area.y + e.y;
        // keep only the lower 60% of the detection to cut off the eyebrow
        Rect eye_only_rectangle = new Rect((int) e.tl().x,
                (int) (e.tl().y + e.height * 0.4), e.width, (int) (e.height * 0.6));
        // reduce the ROI
        mROI = mGray.submat(eye_only_rectangle);
        Mat vyrez = mRgba.submat(eye_only_rectangle); // "vyrez" = crop
        // find the darkest point
        Core.MinMaxLocResult mmG = Core.minMaxLoc(mROI);
        // draw a point to visualise the pupil
        Core.circle(vyrez, mmG.minLoc, 2, new Scalar(255, 255, 255, 255), 2);
        iris.x = mmG.minLoc.x + eye_only_rectangle.x;
        iris.y = mmG.minLoc.y + eye_only_rectangle.y;
        eye_template = new Rect((int) iris.x - size / 2, (int) iris.y - size / 2, size, size);
        Core.rectangle(mRgba, eye_template.tl(), eye_template.br(), new Scalar(255, 0, 0, 255), 2);
        // copy the area to the template
        template = (mGray.submat(eye_template)).clone();
        return template;
    }
    return template;
}
Red rectangles are templates and white dots are pupils.
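The pupil-finding step in get_template relies only on locating the darkest pixel in the reduced eye region, which is what the Core.minMaxLoc call does on the grayscale Mat. The same idea on a plain 2-D array, as a hypothetical OpenCV-free sketch:

```java
// Plain-Java analogue of Core.minMaxLoc as used above: scan a grayscale
// patch (0 = black, 255 = white) for its darkest pixel.
public class DarkestPointSketch {
    // Returns {row, col} of the minimum value in the patch.
    public static int[] darkestPoint(int[][] gray) {
        int bestR = 0, bestC = 0;
        for (int r = 0; r < gray.length; r++)
            for (int c = 0; c < gray[r].length; c++)
                if (gray[r][c] < gray[bestR][bestC]) { bestR = r; bestC = c; }
        return new int[]{bestR, bestC};
    }

    public static void main(String[] args) {
        int[][] eyeRegion = {             // toy grayscale patch
            {200, 190, 180, 200},
            {180,  40,  35, 190},         // dark iris/pupil cluster
            {190,  45, 200, 200},
        };
        int[] p = darkestPoint(eyeRegion);
        System.out.println(p[0] + "," + p[1]); // prints 1,2 (value 35)
    }
}
```

This is also why the eyebrow is cropped off first in get_template: without that crop, dark brow pixels could win over the pupil.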
Thanks to Kirill Kornyakov and Andrey Pavlenko for all the work on OpenCV
Last edit 17.2.2013 fix bad template size and update project/download link
Hi Roman Hosek,
When will you post these tutorials? I really need an Android eye tracking tutorial for my final year project. I searched everywhere but couldn't find any. Thank you very much for supporting us by providing these tutorials 🙂
Thank you,
A. Prasad De Silva
hello, can you please give me your email? I want to ask you some questions; maybe I can learn from your experience. I am trying to do this project:
Media, sensors and actuators: Wake up alert for sleepy drivers
This project intends to design a program that can be put on a mobile device to check the driver's facial expression and alert the driver by audio and a friend by text message – two ways to alert the sleepy driver that will wake him up or make him pull over.
i am trying to do it in Eclipse with OpenCV working, and i noticed that you are working on Android and eye detection
Thank you
Hi, contact me on romanhosek.cz@gmail.com
I downloaded the "Eye detect" app but it shows the error message "Fatal error: can't open camera!". How can it be corrected? Also, I want to ask whether the eye detection can check if the eye is open or closed?
salam karim
brother am also doing the same project for my final year project please please reply me that i can get help from you
jazakALLAH
Hi Roman Hosek,
Thanks for the tutorial. I tried the above example, but it has some serious performance issues. When I tried to run it on my mobile device (Samsung Galaxy Nexus – Android 4.2 Jelly Bean), it got stuck at some point (at creating the template) and gives this error (Unfortunately application stopped working). What might be the problem?
Thank you,
A. Prasad De Silva
What kind of error? While generating the template, there is no check for the case when only one eye is detected; I planned to add it later as I'm currently busy at work. If you have a logcat, please send it to romanhosek.cz@gmail.com, thanks.
Hi Roman Hosek,
This is the error I get when I try to open the application from the emulator: "Unfortunately, Application has stopped" after a few seconds of detecting eyes. I sent the complete logcat file to your email.
Logcat Errors:
02-05 02:18:44.364: E/cv::error()(1114): OpenCV Error: Assertion failed (corrsize.height <= img.rows + templ.rows - 1 && corrsize.width <= img.cols + templ.cols - 1) in void cv::crossCorr(const cv::Mat&, const cv::Mat&, cv::Mat&, cv::Size, int, cv::Point, double, int), file /home/reports/ci/slave/opencv/modules/imgproc/src/templmatch.cpp, line 70
02-05 02:18:44.392: E/AndroidRuntime(1114): FATAL EXCEPTION: Thread-94
02-05 02:18:44.392: E/AndroidRuntime(1114): CvException [org.opencv.core.CvException: /home/reports/ci/slave/opencv/modules/imgproc/src/templmatch.cpp:70: error: (-215) corrsize.height <= img.rows + templ.rows - 1 && corrsize.width <= img.cols + templ.cols - 1 in function void cv::crossCorr(const cv::Mat&, const cv::Mat&, cv::Mat&, cv::Size, int, cv::Point, double, int)
02-05 02:18:44.392: E/AndroidRuntime(1114): ]
02-05 02:18:44.392: E/AndroidRuntime(1114): at org.opencv.imgproc.Imgproc.matchTemplate_0(Native Method)
02-05 02:18:44.392: E/AndroidRuntime(1114): at org.opencv.imgproc.Imgproc.matchTemplate(Imgproc.java:7214)
02-05 02:18:44.392: E/AndroidRuntime(1114): at org.opencv.samples.facedetect.FdView.match_eye(FdView.java:299)
02-05 02:18:44.392: E/AndroidRuntime(1114): at org.opencv.samples.facedetect.FdView.processFrame(FdView.java:235)
02-05 02:18:44.392: E/AndroidRuntime(1114): at org.opencv.samples.facedetect.SampleCvViewBase.run(SampleCvViewBase.java:116)
02-05 02:18:44.392: E/AndroidRuntime(1114): at org.opencv.samples.facedetect.FdView.run(FdView.java:366)
02-05 02:18:44.392: E/AndroidRuntime(1114): at java.lang.Thread.run(Thread.java:856)
Thank you,
A. Prasad De Silva
hallo mr roman
Thank you very much for the tutorial
Anyway, i have some question about this tutorial.
May i send my question by email ?
Thanks
Hi Aris, you are welcome, of course you can.
Roman
this is a good tutorial. is there a link to download the file? are you using template matching to detect eyes and then to track them?
Hi, yes I use template matching; there are other methods, they will be in other tutorials. The download link is now at the bottom of the page.
i can’t find it :/
Hi Roman Hosek,
The download link has been expired. Could you check it? or could you send the source code to me via email: seekingvengeance4solace@gmail.com
thanks for this example. Can you explain to me how you have improved the performance of the detection, since you still use cascade classifier files and add the template matching technique? It seems to improve the quality of the detection but not the detection time, doesn't it?
The cascade classifier used for face detection is LBP, so it's faster than the classic Haar one, and it runs all the time (I didn't try to replace it with matching, because I think it would be inaccurate due to hair and eyebrows). The classifiers for the eyes are classic Haar ones and are used only for creating the templates in the first five frames; after that, only the LBP classifier is used. So reducing the ROI via LBP + cropping this area to the eye region only + splitting the region for each eye + using template matching for the eyes saves some computing time.
You are great, you gave a basic tutorial for handling OpenCV on Android… keep going. Can you give me your mail id so that I could ask my doubts regarding this, Mr. Roman? Mine is hariprasath690@gmail.com
hello ,
while testing this tutorial and other samples of OpenCV 2.4.2, I noticed that the app using video always crashes, even this tutorial. Does it happen for you too? Is it related to the device I'm using?
thanks in advance.
What kind of video do you mean? Importing video into the app? I didn't try it because when I wrote the original code, video import was not implemented in the wrapper yet. Please be more specific about importing video.
i meant to display video not to import video to the app
Ok, all samples work fine for me. What device/Android version do you have? Is the OpenCVManager app up to date?
galaxy s3, API 4.1, and openCVManager updated. The samples work for 1 min, then crash.
ARM Cortex-A9 MPCore quad-core 1.4 GHz
and what error is in the logcat? It looks like a bug in the native camera.
thanks it’s ok now using opencv2.4.3.
Hello,
Is it possible to detect eye blinks using this sample application? Can you give me some hints on how to do this?
Thanks
hi, i have problems using the front camera in this application. I want to make it portrait, but the image is mirrored. How can i fix this? Also, the iris location is found in a for loop, one eye at a time; is it possible to locate the midpoint of the eye? Because i want to track the eyeball movement relative to the screen. thanks a lot!
thanks for your code!! It is very useful to me, but how can I increase the accuracy of detecting the eyes? Because if we need to detect eye blinks, the first thing is to detect the eyes stably. Can you give me some ideas?
thank you so much!
for eye blinks you could try a difference matrix between frames – blink – and detect connected components: two areas of roughly the same size, mirrored along the y axis. It will be in one of the next tutorials.
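A rough plain-Java sketch of the frame-differencing step described in the reply above (the class name and threshold are hypothetical illustrations, not code from the tutorial; the connected-component step is omitted):

```java
// Sketch of blink detection by frame differencing: threshold the absolute
// difference of two grayscale frames and count changed pixels in the eye area.
public class BlinkDiffSketch {
    public static int changedPixels(int[][] prev, int[][] curr, int threshold) {
        int count = 0;
        for (int r = 0; r < prev.length; r++)
            for (int c = 0; c < prev[r].length; c++)
                if (Math.abs(prev[r][c] - curr[r][c]) > threshold) count++;
        return count;
    }

    public static void main(String[] args) {
        int[][] open   = {{30, 30},   {200, 200}}; // dark iris row visible
        int[][] closed = {{180, 180}, {200, 200}}; // eyelid covers the iris
        // A large changed-pixel count in the eye region hints at a blink.
        System.out.println(changedPixels(open, closed, 50)); // prints 2
    }
}
```

On real frames you would run this only inside the tracked eye rectangles, then look for the two mirrored changed regions the reply mentions.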
Hey Roman, very good, congratulations. I'll try it and I'll leave my comments. I just wanted to ask which algorithm you used to locate the iris of the eye?
Hi, because of the reduced ROI (without eyebrows), I simply use minMaxLoc to locate the darkest pixel.
Roman, thank you for your answer
are there other tutorials on how to improve face detection performance, like skin detection or something else?
for example, optical flow + Kalman filter to improve the tracking task.
Roman, nice sample! We referenced it on http://opencv.org/platforms/android.html (see 'Online resources' section).
Thanks a lot Andrey, I really appreciate that! Next samples will be coming soon.
hi roman i have question concerning opencv can i send you my question by email?
yep, of course
hii roman,
really thanks for your application. I recently developed an eye tracking system for disabled people using gaze detection. I'm really looking forward to trying out your sample. I currently have the Jelly Bean version on my mobile; will this example work with it?
Hi, yes JB is supported, as the OpenCV Manager application takes care of it.
hii roman,
thanks for your reply. When I'm running your code I get an error saying that the ndk.cmd path is not found in the project. Please help me with this. thanks 😉
hi roman,
how do I check the eye open state and closed state with this sample? For example, if the eye is open, a message should display "Eye is Open", and if it is closed, a message should display "Eye closed". Can you tell me the code to do this? Thanks 🙂
hi Roman, thanks for this great sample!! Could you please show how it can be used to detect an eye 'blink'?
I'm a novice and interested in developing eye blink detection using OpenCV in a project to prevent the driver of a four-wheeler from falling asleep/getting drowsy while driving.
thanks
this tutorial is part of my work on a driver drowsiness monitor, but iris detection is still too inaccurate
Great tutorial …just awesome…
I am trying to accomplish face recognition on Android. Can you provide me with some pointers?
Hi, for recognition, look at the Eigenfaces example.
I have tested your app on my Samsung Galaxy A4 and it doesn't work. I use OpenCV 2.4.5 with IntelliJ.
Hi, do you have any logcat or error messages?
I would like to do iris recognition in JavaCV. Can you help me?
I'm just working on iris recognition, but it's still too inaccurate. I will post it as a new tutorial in the future if I find a more robust solution.
Hi Roman,
Thanks a lot for this awesome work. I applied it using Eclipse after downloading "Tegra developer pack 2", but I got too many errors that said "cannot be resolved to a type". Can you please help me fix this by giving me some hints?
Thanks.
Hi Suzan,
please send me screenshot/description of those errors to romanhosek.cz@gmail.com
Thanks
Hi Roman,
How do you make the camera portrait in your code?
thanks for this example.
I need an 'android eye tracking tutorial' for my project. I searched everywhere but couldn't find one.
Thank you very much for supporting us by providing these tutorials
Hi Roman,
I've been playing around with your code for a while now and it works perfectly, as you describe. However, I was wondering if there is a way to detect if the eyes are closed. If I understand correctly, in the case of TM_SQDIFF the template is compared to the camera image, and the template matching will look for the best matching value and move the yellow rectangles over to that area. How I would like the program to work is: if it detects the eyes where they should be, the rectangles stay yellow; otherwise the rectangles turn a different color. If you could help me out in any way I would greatly appreciate it.
Thank you for all of your hard work.
Hey Roman,
I tried to test your application on my Sony Xperia Z, but immediately after the app started it crashed, and I get these error messages in logcat: Choreographer – Skipped 48 frames! The application may be doing too much work on its main thread
Aborting: heap memory corruption in tmalloc_large addr=0x6832ca
Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 18372 – could you give me a hint about what I would have to change? Or is it possible that the app only runs in the emulator or only on certain phones?
Send me the whole logcat please; maybe there is a compatibility problem with the OpenCV native camera.
Hi, a really very helpful tutorial; it will be very useful for my next project. But I want to ask: in my case, if I am focusing on one icon on the screen, after 3 seconds the icon should be clicked, and the same for a second user – if the user focuses on a button, it should click after 5 seconds. Is it possible to do this with this Android eye detector, or can you suggest how I can create something like this with it? Thanks in advance, waiting for a reply.
Yes it's possible; I saw this kind of tracking on iOS, but not with this example – you have to track the angle and do some additional computations. Maybe if you reeeeally simplify the checking, you can show e.g. one eye on the whole screen and track the pupil against the position of the button on the screen.
Thank you for your contribution.
You are welcome. Thanks for comment!
Hi Roman
This is very interesting. I have a few ideas I hope you will like; may I share them via email?
many thanks
Jacob
can i know what algorithm you use in this case ?
Hi Roman
Thank you for sharing, I have a question
I don't want to open the camera, I just want to open an image, so how do I open an image?
I use: Mat image = Highgui.imread("/mnt/sdcard/ban.jpg"); but it cannot open it.
Thanks,
Chi
hi ,roman
thanks for providing this tutorial. I downloaded the project source code and it shows some errors in imports like
org.opencv.core.Mat
org.opencv.highgui.VideoCapture.
please help me
Hi Rahul, did you link the OpenCV library to the project? It seems like a bad import.
yes, I imported the OpenCV library (2.4.2) project into my workspace, but it shows some errors in the gen folder: OpenCVEngineInterface, getLibraryList(java.lang.String version) throws android.os.RemoteException. I followed the steps you mentioned for the library project, but it's showing errors in the org.opencv.engine package in the gen folder
Did you try Clean and Build? Another question: do you use Eclipse (ant build) or Android Studio (gradle build)? In the case of AS and gradle, ask the opencv team if the current gradle supports this.
Hello friend, I am getting negative coordinates with your code. Can you let me know if there is a problem? Thanks
Hello Roman,
Plzzzz help me..I have mailed you my problem with description.. Unable to sort out from 4 days…
Waiting for your rply..
Hi, I just replied to your email.
Hi Roman,
THANKS FOR YOUR HELP..BUT NOW I AM GETTING SOME OTHER PROBLEM i.e heap memory corruption in malloc_small..
I am mailing you the whole logcat through mail..plz help..thank u..
Hi Roman,
This tutorial was really useful.
However I’m looking for some help for gaze/pupil detection using Object Tracking functions of opencv. (http://opencv.willowgarage.com/documentation/c/video_motion_analysis_and_object_tracking.html)
Can you provide some guidance as to where I can get good tutorials as whatever I have found so far hasn’t been useful.
Tried the stuff; however, I notice that the camera preview is not coming up. I am digging into the code and checking if there is anything amiss with respect to the OpenCV library version etc. On a side note, I noticed that the face detection sample uses Camera mCamera while your code uses VideoCapture mCamera – can this be done with Camera mCamera too?
probably yes; I see that the OpenCV team went with different camera handling in newer versions, but I don't have time to check it / update the sample now.
Hi Roman, first off thanks for this great tutorial.
I have searched the web in every direction, but I couldn't find the solution to the problem I have with your project.
I get an error on application startup:
08-11 08:18:55.406: E/OpenCV::camera(6779): CameraWrapperConnector::connectToLib ERROR: cannot dlopen camera wrapper library
Funny thing is that the original openCV face detection sample works. I tried to copy the native camera libs into the project and modify the makefile to install the camera modules. No luck.
Maybe you have any hint?
Thanks,
Andrea
Ok, forget the above please. It didn’t work, because openCV is not supported on the latest 4.3 update that came on the Nexus. Thanks!
Hi roman,
I have some questions. 1st: do you recommend the OpenCV 2.4.2 version or the 2.4.6 version? I got those two, and I just read your updated post for the 2.4.6 version. I'm a newbie, so I'm not sure if the older version is better (or easier).
2nd: if I understand correctly, you detect the eye based on the darkest point. I'm trying to detect the eye with the Hough transform (finding a circle). Do you think it's possible? Can you help me with this?
And finally, thank you…really, THANK YOU for posting this. It helps me a lot
(it is the only example I’ve understood).
Ps: Sorry about my english, I’m still learning this languaje.
Hi, definitely try 2.4.6 – it's only one class, and if you download my sample, it's NDK-free – better for beginners.
I detect the eye with the Haar detector, then reduce the area and find the darkest point – this point is the centroid of a new rectangle, which becomes the eye template… then it's simple template matching. Yes, I tried to use HoughCircles while trying to detect open/closed eyes, but maybe with my hardware and light conditions it was too inaccurate for me.
Hi, this looks quite similar to the eye detection and matching I am doing with OpenCV on the iPad! I am interested in the performance of your application. Do you calculate your fps? What is in general possible with the front camera of your phone in a video stream, and what is the actual fps during the template matching? Thanks Jules
Congratulations for the tutorial!
I have installed your application on my device, but when I run it nothing is shown in the camera view. What do you think the problem is?
Please help me.
On which device model?
Roman, why does this sample application not work on Sony mobile phones? I tried your application on a Sony Xperia P, but after opening, the application suddenly closes after a few seconds. When I tried it on a Samsung Nexus, it works perfectly.
I found a problem on Samsung devices as well. I will check it.
When I run it on the Android emulator, this error is raised: Cannot connect to OpenCV Manager
And is OpenCV Manager installed?
How do I install it on the emulator?
Hi Roman,
I liked your work and added some eye blink detection code and ran it on a Samsung Galaxy S4, but I got "Blink has stopped". Can you please help me fix this problem? I'd appreciate it.
Can you share your blink detection code with us? I'm also working on a blink detection project. Thanks.
Yes, send me your email.
Suzan
heltonoliveira11@hotmail.com
Hi Suzan,
Can u please provide me the same as well? I’m working on a project and it needs blink detection.
I’ll be thankful to you.
Vikas Singh
Hi Suzan,
Can you please provide me the blink detection code as well?
It would be very helpful for me.
Thanks,
Shahar
email: hazutsha@gmail.com
Send me your email.
Hi Suzan,
It’s vikas.singh1188@gmail.com Please send that blink detection project. Please share some insights also, how can I detect the blink of the eyes separately.
Thanks in advance.
Hi Suzan,
Could u please provide me the blink detection code?
I’m also running it on Samsung galaxy s4.
It would be very helpful for me,
Thanks
Mingjo
Hello Suzan,
Can you share with me your code of blink detection
this is my email: amidoo1978@gmail.com
thxxx 🙂
Hi Suzan,
Could u send me blink detection code please?
This is my e-mail : yasinkaralurt@hotmail.com
Thank you.
Check your email, I sent it to you.
suzan_anwer2000@yahoo.com
plz also share with me..
danyal550@gmail.com
Can you send it to me too? I would like to learn how to do blink detection.
my email: ibrahims.saputra99@gmail.com
Or if you have a blog or website where I can see it, please tell me.
THANKS 🙂
I'm working on creating a fatigue sensor for Android! Reading the comments on this tutorial, I saw that you have code for blink detection; would you pass me this code so I can study it?
Thank you! And I'm sorry for the way I write, I'm still learning this language.
Hi, Sara
can you please send the blinking code? I am also working on it.
Thanks
My email address is: fastian.faraz@gmail.com
Hi Sara,
Can u please provide me the same as well? I’m working on a project and it needs blink detection.
I’ll be thankful to you.
email: 28683096@qq.com
Hi Sara,
Could u please provide me the blink detection code?
It would be very helpful for me, please.
email: hisua1215@gmail.com
Hi Sara, could you also send me a copy of the blink detection code? Thanks
sandgodoven1@gmail.com
Hi Sara would you please share with me the eye blinking code? 🙂 Many Many thanks in advance 😀 my email mhd.adeeb.masoud@gmail.com
Can you please share the blink detection code with me? I am working on a project which needs it, and I have been stuck for a long time.
My email id : rsonone42@gmail.com
Hi Roman, it is a nice tutorial!
I am amazed by this example.
I am trying to scroll a list view when the eye reaches the end of the list. I have been trying for the past 3 months; can you please help me with this?
thanks in advance.
Thanks for this tutorial. We are waiting for your blink detection tutorial 🙂
Hi Roman,
I downloaded your App “Eye Detect Sample” from the Download Eclipse Project
I tried to run it from Eclipse and had many errors, but I fixed them. Finally I got stuck with the following errors and warnings:
"I SENT YOU AN EMAIL WITH A SCREENSHOT OF THE ERRORS"
Please try to help me ASAP
Best regards,
Adil
Thank you for your contribution!!
Thank you, man, for a great tutorial.
Keep it up, bro.
hello!
Can you provide the full source code of this app?
I'm finding some bugs while running it.
Thanks in advance
There is a link to download the full Eclipse project at the end of the article.
There are some errors with connecting the camera on some devices, which I'm checking, but I'm too busy at work at the moment.
Hi Roman!
Please help.
I downloaded your project and imported it into Eclipse for Android, but it doesn't work.
Probably bad settings…
Hi
I have just started with OpenCV for Android; my project is eye detection on an Android phone. When I import the OpenCV 2.4.6 library I can run some sample projects (camera-control, color-blob-detection, 15-puzzle), but I cannot run the face detection and camera preview samples. Please help me.
Hi, it looks like you have a bad setup of the native code builder. Please see the tutorials on the OpenCV website about it first (the link is in the first article).
Hello ,Roman
I want to detect open eyes only. Can I detect blinking eyes? If yes, please give me an idea.
Hi Roman,
Thank you for the last advice; I can run the face detection sample now. I have a problem when I try to import your project: it imports, but it can't show the picture on my phone, only a black screen. The error log says: Fatal signal 11 (SIGSEGV) at 0x00000000 (code=1)
Please, I need the code for portrait mode.
Hi, roman,
Thanks for your sharing and your quick responses to others in need. I have a question: how can I get the eye's position as the viewer's location? That is, how could I get the eye's position (x, y, z – maybe the distance between the eye and the screen)?
Any idea will be welcome.
My C-language project got a problem -> (Description Resource Path Location Type
Program "\ndk-build.cmd" not found in PATH – OpenCV Sample – face-detection for C/C++ Problem)
How can this be solved?
Help me.
The error description is clear: you have a wrong PATH or NDKROOT. Depending on which operating system you use, check http://docs.opencv.org/doc/tutorials/introduction/android_binary_package/O4A_SDK.html#import-opencv-library-and-samples-to-the-eclipse (this link is mentioned at the beginning of the tutorial; please read the instructions in this tutorial more carefully before sending me email – I get a lot of emails with such common errors and mistakes, and I'm not able to reply to all of them)
Hey, I already ran your app and it works fine.
I have something to add which I could not fix: I need to create an alarm if one eye stays in a closed state for 1 second. Can you give me some pointers? I already tried everything but could not fix it. Can you help me? cruser_pan90@yahoo.com
Hi Roman
Thank you for this great tutorial.
I'm working on my graduation project, which helps the doctor take student attendance using face recognition, so I downloaded your tutorial, but there is an error saying "FdView cannot be resolved" and I don't know how to fix it. Can you also send me any advice on how to do my application with face recognition?
best regards
Asem Battah
asembattah@gmail.com
Hi Roman
Thank you for this great tutorial.
I have some questions.
Can I ask you for advice?
hi
Thank you for the tutorial.
I just want to ask if there is any possibility of adding mouth detection to this tutorial.
I got an error; how can I solve it?
java.lang.UnsatisfiedLinkError: Native method not found: org.opencv.samples.facedetect.DetectionBasedTracker.nativeCreateObject:(Ljava/lang/String;I)J
hi roman,
Bro, I am doing my final year project, a "driver drowsiness detection" Android app.
i really need your help
thanx
Can anyone share the blink detection code? My id:
mahirakonda@gmail.com
Roman, you are the best! Thanks.
Maybe it would be a good idea to put the code on GitHub, for example to see what change you made to fix the Samsung errors.
Does this method use equalization? Thanks.
Hi Roman; sorry to bother you, just a quick question:
1) Does this use a histogram?
2) Is a template created for both eyes?
3) The eye region – was this algorithm made by trial and error?
I have so many questions about this! Can you email me? I am trying to use this in a scientific study and am modifying it a bit.
Hi roman ,
Great job, man. I am developing an application for physically disabled people based on face detection, and I used your tutorial; it really helped a lot. But I want to ask you something: when I detect a face using detectMultiScale, the position of the detected face changes frequently.
Any solution for this?
I want to implement eye gazing, and a precondition for that is to stop this jitter.
thanks in advance.
@suzan
Can you send your blink detection code to me?
thank u
Thanks a lot for the very good lessons and tutorials. I'm searching for C++ code like your project. Do you have any C++ eye blinking project? Would you please send me the code for this project?
Hi Suzan, would you please send me the eye blink detection code?
my email : m.monzavi@iseikco.com
Hi Roman!
I can't find the downloadable sample at the end of the page; can you help?
Hi, thanks for the hint; it seems that an update of the Download Manager for WP hid it.
Should be fixed now.
When I import OpenCV, an error occurs:
"Errors running builder 'Android Package Builder' on project 'OpenCV Library – @OPENCV_VERSION@'.
Resource '/OpenCV Library – @OPENCV_VERSION@/bin' does not exist."
Please help me :/
hi Roman.
how can i detect the eye opened or closed ?
can u help ?
Hi Roman,
I get an error; LogCat shows libnative_camera_r4.3.0.so, libnative_camera_r4.0.3.so, libnative_camera_r4.4.0.so, etc., and Fatal signal 11. My device is an LG G2, Android 4.4.2.
How can I fix this problem?
Hi Roman, it's a nice step-by-step tutorial.
I just had to change a little bit and it works like a charm.
I think I will develop it for my final project in college.
I am doing face expression recognition with the Haar algorithm; I found your tutorial and tried it. It's a good beginning, man.
It would be great if you could point out how to get the mouth area, because I need it for feature extraction for the later calculation to get the expression.
I’d like to hear your opinion.
Thanks
Fery J
hi roman,
Can you please help me detect the mouth with haarcascade_mouth.xml and then find the white color of the teeth (as with the black pupil in this case)?
Hi Roman!
Thanks in advance for the tutorial. A few weeks ago this was my starting point. Now I want to develop my own classifier so I can tell whether the eye is open or closed. I know there are a bunch of ways to do that, but my intent is to compare a couple. So, would you know how to create a wholly new XML in ".raw" with my training images?
Thanks again.
I have also imported this project into Eclipse with OpenCV, but I get too many errors. Can you help me with this situation?
I am using Android Studio.
I want to build this source code, but I don't know what this code means...
Can you give me information on where I should put these source files? ㅜㅜ
Hi Roman,
I need to save only the face detected by your code, but I am unable to do so.
Please help me with how I can achieve this.
Waiting for your reply.
Hi Sara and Suzan, would you please send me the eye blink detection code?
my email: e.aghakouchaki@yahoo.com
Hi Roman,
This is a really amazing tutorial. Now I want to find the corners of the eyes and then the screen coordinates of where the user is looking. Can you please guide me?
I am working with Android Studio and OpenCV.
Thanks in advance.
Very good tutorial! I have a project involving detecting pupils from a bitmap instead of using the camera. Can anyone help me with this? Thank you so much in advance.
Hi Roman,
I used Android Studio to import this project, but it failed.
The error was:
"* Project OpenCV-Sample … project.properties:
Library Reference …\sdk\java could not be found"
I tried copying some of the code above into my project (OpenCV added successfully), but it doesn't work either.
Can you please guide me ??
Waiting for your reply.
Thank you!!
Hello sir. How about watershed with Android OpenCV? Can you help me? Thanks in advance.
Hi,
My company just created a 2-camera system for cars. The front cam and cabin cam can record two video streams, 1080p and 720p at 30 fps.
The cabin camera can be adjusted to aim at the driver's face. An SDK is ready, based on a lite Android OS.
Is anyone interested in developing their drowsiness detection system on this 2-camera system? This would be a first consumer product, very meaningful to the car safety industry.
Please contact me at jasonh@teamresearchinc.com
Best,
Hi Sara,
Could u please provide me the blink detection code?
It would be very helpful for me, please.
email: cantoniadis79@hotmail.com
Hi Roman Hošek,
The code above for pupil detection points exactly at the pupil area for some images, but the problem is that for a few images the pupil detection (white circle) is placed near or above the eyelashes.
Can you please help me?
Waiting for your reply
Thank u
email:deepaandroid99@gmail.com
Hi, thank you for the tutorial. I need some help; I am new to OpenCV. I want to know how we can detect an eye with OpenCV and store the detected eye in a database, and then, when we detect the eye again, show one message on screen if it matches the eye already stored in the database and a different message otherwise. Thank you; I will be waiting.
Zeeshan Riaz
Knowledge Platform
Hi Suzan,
Can u please provide me the same as well? I’m working on a project and it needs blink detection.
I’ll be thankful to you.
My Email id is : rsonone42@gmail.com
I mean the blink detection code.
Roman Hošek! I am commenting here now. I configured OpenCV in Android Studio; it is configured and I can import the OpenCV classes, but how do I use this code? Please make a video; I want to change eye color in real time.
Hi Roman, can you please tell me how to take pictures using this code? Please, I need help.
How can I change eye color using the same code? Is it possible? Please guide me.
Is it possible to use this code without OpenCV Manager? Or is it possible to install it automatically along with our app? I don't want to ask the user to install another app.
Thank You
Hi, yes you can bundle specific opencv version in your project => static initialization, please look at this tutorial:
Application development with static initialization
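For anyone asking about static initialization: the tutorial Roman links boils down to loading the bundled native libraries in a static block instead of calling the asynchronous loader. A minimal, non-runnable configuration-style sketch (it assumes the OpenCV native binaries are bundled in the APK, e.g. under the app's jniLibs folders; `OpenCVLoader` is from the OpenCV Android SDK):

```java
import android.app.Activity;
import android.util.Log;

import org.opencv.android.OpenCVLoader;

public class FdActivity extends Activity {
    static {
        // Loads the OpenCV natives bundled in the APK; no OpenCV Manager needed.
        if (!OpenCVLoader.initDebug()) {
            Log.e("FdActivity", "Static OpenCV initialization failed");
        }
    }
    // ... rest of the activity as in the face detection sample ...
}
```

With this in place, the `BaseLoaderCallback`/`initAsync` path from the sample can be dropped.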
Hi, if I use Android Studio, will this work? What is the difference between Eclipse and Android Studio? Which tool is easier for a beginner?
Hi Roman, is it possible to crop and save these images into an external directory? Can you please kindly send me the code for that? I also got an error while running: "Build of configuration Default for project OpenCV Sample – face-detection ****
\ndk-build
Cannot run program "\ndk-build": Launching failed
Error: Program "/ndk-build" is not found in PATH
PATH=[C:/Program Files/Java/jre1.8.0_101/bin/client;C:/Program Files/Java/jre1.8.0_101/bin;C:/Program Files/Java/jre1.8.0_101/lib/i386;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Program Files\Microsoft SQL Server\90\Tools\binn\;;E:\Error\Andorid For Students\adt-bundle-windows-x86-20131030\eclipse;]
**** Build Finished ****
“
But I followed all the instructions you gave; please kindly help me. Thanks.
Hi, where is your eye blink tutorial link?
Or can you share the source code to: yihui225@gmail.com
Thanks
I really need you help.
One more question: how can I use Android Studio to combine multiple projects?
Hi, I am new to OpenCV. I want to match a camera image with images stored in Android SQLite for searching purposes, but I haven't found any solution; I am tired of trying again and again. Can you please give me any idea?
Actually, I am developing an insect finder project; that's why I need to match my SQLite database images with the camera image. If you have any idea, please help me. I will be grateful to you.
Hello! Mr. Roman, I am a Chinese university student. I have been researching face recognition recently, but I got a lot of errors after importing your code into Eclipse, and I do not know how to solve them or how to send the problem to you; I took a screenshot. Can your code be used with OpenCV 2.4.9 on the Windows platform (Eclipse compiler)? The console always shows the "\ndk-build.cmd"
Cannot run program "\ndk-build.cmd": Launching failed problem, but I have imported OpenCV 2.4.9 into Eclipse. I hope to get your help! Thank you!
Hi, this code is supposed to be used in Android Studio – AFAIK Eclipse is no longer supported as the main Android IDE.
What OS are you working on? On Linux you need to use just ndk-build without any extension. Please check that your path to the NDK folder is right – for Android Studio you can find it in local.properties as ndk.dir
Hello there! Mr. Roman, please forgive me for disturbing you again. My NDK environment configuration is correct, but when I import the project, I get the following problem in project.properties:
Library reference ..\..\sdk\java could not be found
I downloaded the project directly from your tutorial and imported it in Android Studio 2.2.2, but I get this problem and do not know how to solve it. I am not on Linux; I am on the Windows operating system. Could that be the problem? Please help me with advice; thank you again!
Hello! Thank you for providing this great example for us. But can you explain something about the line "e.x = area.x + e.x;"? Thank you very much!
Hi Roman, I'm facing the following error while running OpenCV 3.2.0 in Eclipse Juno with Android 6.0:
"R cannot be resolved to a variable", which is used in the following files:
R.raw.lbpcascade_frontalface
R.layout.face_detect_surface_view
R.id.fd_activity_surface_view
Please reply ASAP.
Hello sir,
Your tutorial is very nice, but I need the code for Android Studio; it's not working there. The error message on every Highgui and VideoCapture usage is "cannot resolve type". I found a suggested solution: replace Highgui with Imgcodecs, but that does not solve the error.
Error:
mCamera.set(Highgui.CV_CAP_PROP_FRAME_WIDTH, mFrameWidth);
mCamera.set(Highgui.CV_CAP_PROP_FRAME_HEIGHT, mFrameHeight);
I replaced Highgui with:
mCamera.set(Imgcodecs.CV_CAP_PROP_FRAME_WIDTH, mFrameWidth);
mCamera.set(Imgcodecs.CV_CAP_PROP_FRAME_HEIGHT, mFrameHeight);
but CV_CAP_PROP_FRAME_WIDTH and CV_CAP_PROP_FRAME_HEIGHT still show errors. How do I solve this?
Please, please send me the Android Studio code.
Hello sir,
Can you share the Android Studio code with me? It is not working in Android Studio. I use OpenCV library 2.4.0; not all packages are imported:
org.opencv.highgui.VideoCapture
import org.opencv.imgcodecs.VideoCapture;
Please send me the code at this email: idev02inkc@gmail.com
Hello there!
Just to say thank you for sharing your knowledge. I've successfully installed the project, and even added some features for a class project. Again, thank you from a beginner Android developer.
Hello from France!
We are managing a fablab dedicated to open source and disabilities, and Nicolas always has his phone behind his electric wheelchair. It would be great to implement eye tracking as an Android app able to send Bluetooth data emulating a mouse. Please provide any useful link that could help us help him.
Congrats on the work and the sharing!
Hi Roman, thanks for this great tutorial; it works very well for me. In addition, is it possible to get the position of the target eye without displaying the camera view on the LCM? In my case, I want to get the position of the user's eye while playing a video with MediaPlayer for the user. I found that when the MediaPlayer runs, the eye tracking no longer works. Why is that? Is it because the view is taken over by the MediaPlayer? If so, how can I fix this problem? Also, I found that there is a class CameraGLSurfaceView in OpenCV 3+. Is it a good choice for an eye-tracking-only application? Do you know how to use this class? Thank you.
Hi Roman
Did you develop an open-eye recognition JavaScript application too?
If yes, I would be interested, and I would like to share a great opportunity about this with you.
Massimiliano Mazzarotto – Italy