{"id":5564,"date":"2020-01-24T14:43:35","date_gmt":"2020-01-24T09:13:35","guid":{"rendered":"https:\/\/www.innovationm.com\/blog\/?p=5564"},"modified":"2026-01-14T11:47:38","modified_gmt":"2026-01-14T06:17:38","slug":"capture-image-on-eye-blink","status":"publish","type":"post","link":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/","title":{"rendered":"Capture Image on Eye Blink"},"content":{"rendered":"<p>The <strong>Mobile Vision AP<\/strong>I provides a framework for recognizing objects in photos and videos. The framework includes\u00a0<b>detectors<\/b>, which locate and describe visual objects in images or video frames, and an <strong>event-driven API<\/strong> that tracks the position of those objects in video.<\/p>\n<p>The objects that can be tracked by <strong>Mobile Vision API<\/strong> include facial features, text and bar codes.<\/p>\n<p>For our purpose, we will be doing real-time tracking of face objects using a custom camera. So let&#8217;s get started.<\/p>\n<p>First, create a new project and add the following dependencies in your app level <strong>build.gradle<\/strong> file which is available at location:\u00a0<strong>app\/build.gradle<\/strong>.<\/p>\n<pre class=\"lang:java decode:true\">dependencies {\r\n    \/*... other dependencies ...*\/\r\n    implementation 'com.google.android.gms:play-services-vision:17.0.2'\r\n}<\/pre>\n<p>Now open <strong>strings.xml<\/strong> file at location <strong>res\/values\/strings.xml<\/strong> and add the following in it.<\/p>\n<pre class=\"lang:default decode:true\">&lt;string name=\"take_photo\"&gt;Blink your eyes to capture photo&lt;\/string&gt;\r\n&lt;string name=\"permission_required\"&gt;Permission Required&lt;\/string&gt;\r\n&lt;string name=\"permission_message\"&gt;You must grant permission to access camera to run this application.&lt;\/string&gt;\r\n&lt;string name=\"permission_warning\"&gt;All permissions are required.&lt;\/string&gt;<\/pre>\n<p>Now let&#8217;s build our layout file. 
So open the <strong>activity_main.xml<\/strong> file and add the code below.<\/p>\n<pre class=\"lang:default decode:true \">&lt;?xml version=\"1.0\" encoding=\"utf-8\"?&gt;\r\n&lt;FrameLayout xmlns:android=\"http:\/\/schemas.android.com\/apk\/res\/android\"\r\n    android:layout_width=\"match_parent\"\r\n    android:layout_height=\"match_parent\"&gt;\r\n\r\n    &lt;SurfaceView\r\n        android:id=\"@+id\/surfaceView\"\r\n        android:layout_width=\"match_parent\"\r\n        android:layout_height=\"match_parent\"\r\n        android:visibility=\"gone\" \/&gt;\r\n\r\n    &lt;TextView\r\n        android:id=\"@+id\/tv_capture\"\r\n        android:layout_width=\"wrap_content\"\r\n        android:layout_height=\"wrap_content\"\r\n        android:layout_gravity=\"center_horizontal|bottom\"\r\n        android:background=\"#8cffffff\"\r\n        android:padding=\"20dp\"\r\n        android:text=\"@string\/take_photo\"\r\n        android:textStyle=\"bold\"\r\n        android:visibility=\"gone\" \/&gt;\r\n&lt;\/FrameLayout&gt;<\/pre>\n<p>Here, the SurfaceView hosts the camera preview that we will create later in this post.<\/p>\n<p>Now, in <strong>MainActivity.java<\/strong>, add the code below.<\/p>\n<pre class=\"lang:java decode:true\">package com.test.camerademo.ui;\r\n\r\nimport android.Manifest;\r\nimport android.app.AlertDialog;\r\nimport android.content.DialogInterface;\r\nimport android.content.Intent;\r\nimport android.content.pm.PackageManager;\r\nimport android.graphics.Bitmap;\r\nimport android.graphics.BitmapFactory;\r\nimport android.os.Bundle;\r\nimport android.os.Handler;\r\nimport android.os.Looper;\r\nimport android.support.annotation.NonNull;\r\nimport android.support.v4.app.ActivityCompat;\r\nimport android.support.v4.content.ContextCompat;\r\nimport android.support.v7.app.AppCompatActivity;\r\nimport android.util.Log;\r\nimport android.view.SurfaceHolder;\r\nimport 
android.view.SurfaceView;\r\nimport android.view.View;\r\nimport android.widget.Toast;\r\n\r\nimport com.google.android.gms.vision.CameraSource;\r\nimport com.google.android.gms.vision.face.FaceDetector;\r\nimport com.google.android.gms.vision.face.LargestFaceFocusingProcessor;\r\nimport com.test.camerademo.R;\r\n\r\nimport java.io.IOException;\r\nimport java.util.ArrayList;\r\n\r\nimport static android.Manifest.permission.CAMERA;\r\n\r\npublic class MainActivity extends AppCompatActivity implements SurfaceHolder.Callback, CameraSource.PictureCallback {\r\n\r\n    public static final int CAMERA_REQUEST = 101;\r\n    public static Bitmap bitmap;\r\n    private SurfaceHolder surfaceHolder;\r\n    private SurfaceView surfaceView;\r\n    private String[] neededPermissions = new String[]{CAMERA};\r\n    private FaceDetector detector;\r\n    private CameraSource cameraSource;\r\n\r\n    @Override\r\n    protected void onCreate(Bundle savedInstanceState) {\r\n        super.onCreate(savedInstanceState);\r\n        setContentView(R.layout.activity_main);\r\n\r\n        surfaceView = findViewById(R.id.surfaceView);\r\n\r\n        detector = new FaceDetector.Builder(this)\r\n                .setProminentFaceOnly(true) \/\/ optimize for single, relatively large face\r\n                .setTrackingEnabled(true) \/\/ enable face tracking\r\n                .setClassificationType(\/* eyes open and smile *\/ FaceDetector.ALL_CLASSIFICATIONS)\r\n                .setMode(FaceDetector.FAST_MODE) \/\/ for one face this is OK\r\n                .build();\r\n\r\n        if (!detector.isOperational()) {\r\n            Log.w(\"MainActivity\", \"Detector Dependencies are not yet available\");\r\n        } else {\r\n            Log.w(\"MainActivity\", \"Detector Dependencies are available\");\r\n            if (surfaceView != null) {\r\n                boolean result = checkPermission();\r\n                if (result) {\r\n                    setViewVisibility(R.id.tv_capture);\r\n          
          setViewVisibility(R.id.surfaceView);\r\n                    setupSurfaceHolder();\r\n                }\r\n            }\r\n        }\r\n    }\r\n\r\n    private boolean checkPermission() {\r\n        ArrayList&lt;String&gt; permissionsNotGranted = new ArrayList&lt;&gt;();\r\n        for (String permission : neededPermissions) {\r\n            if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {\r\n                permissionsNotGranted.add(permission);\r\n            }\r\n        }\r\n        if (!permissionsNotGranted.isEmpty()) {\r\n            boolean shouldShowAlert = false;\r\n            for (String permission : permissionsNotGranted) {\r\n                \/\/ Show the rationale dialog if it applies to any of the denied permissions.\r\n                shouldShowAlert = shouldShowAlert || ActivityCompat.shouldShowRequestPermissionRationale(this, permission);\r\n            }\r\n            if (shouldShowAlert) {\r\n                showPermissionAlert(permissionsNotGranted.toArray(new String[permissionsNotGranted.size()]));\r\n            } else {\r\n                requestPermissions(permissionsNotGranted.toArray(new String[permissionsNotGranted.size()]));\r\n            }\r\n            return false;\r\n        }\r\n        return true;\r\n    }\r\n\r\n    private void showPermissionAlert(final String[] permissions) {\r\n        AlertDialog.Builder alertBuilder = new AlertDialog.Builder(this);\r\n        alertBuilder.setCancelable(true);\r\n        alertBuilder.setTitle(R.string.permission_required);\r\n        alertBuilder.setMessage(R.string.permission_message);\r\n        alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() {\r\n            public void onClick(DialogInterface dialog, int which) {\r\n                requestPermissions(permissions);\r\n            }\r\n        });\r\n        AlertDialog alert = alertBuilder.create();\r\n        alert.show();\r\n    }\r\n\r\n    private void requestPermissions(String[] permissions) {\r\n        
ActivityCompat.requestPermissions(MainActivity.this, permissions, CAMERA_REQUEST);\r\n    }\r\n\r\n    @Override\r\n    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {\r\n        if (requestCode == CAMERA_REQUEST) {\r\n            for (int result : grantResults) {\r\n                if (result == PackageManager.PERMISSION_DENIED) {\r\n                    Toast.makeText(MainActivity.this, R.string.permission_warning, Toast.LENGTH_LONG).show();\r\n                    checkPermission();\r\n                    return;\r\n                }\r\n            }\r\n            setViewVisibility(R.id.tv_capture);\r\n            setViewVisibility(R.id.surfaceView);\r\n            setupSurfaceHolder();\r\n        }\r\n        super.onRequestPermissionsResult(requestCode, permissions, grantResults);\r\n    }\r\n\r\n    private void setViewVisibility(int id) {\r\n        View view = findViewById(id);\r\n        if (view != null) {\r\n            view.setVisibility(View.VISIBLE);\r\n        }\r\n    }\r\n\r\n    private void setupSurfaceHolder() {\r\n        cameraSource = new CameraSource.Builder(this, detector)\r\n                .setFacing(CameraSource.CAMERA_FACING_FRONT)\r\n                .setRequestedFps(2.0f)\r\n                .setAutoFocusEnabled(true)\r\n                .build();\r\n\r\n        surfaceHolder = surfaceView.getHolder();\r\n        surfaceHolder.addCallback(this);\r\n    }\r\n\r\n    public void captureImage() {\r\n        \/\/ Delay the capture by 200ms so that the captured image is stable.\r\n        \/\/ The handler already posts to the main looper, so no extra runOnUiThread call is needed.\r\n        new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {\r\n            @Override\r\n            public void run() {\r\n                clickImage();\r\n            }\r\n        }, 200);\r\n    }\r\n\r\n    private void clickImage() {\r\n        if (cameraSource != null) {\r\n            cameraSource.takePicture(null, this);\r\n        }\r\n    }\r\n\r\n    @Override\r\n    public void surfaceCreated(SurfaceHolder surfaceHolder) {\r\n        startCamera();\r\n    }\r\n\r\n    private void startCamera() {\r\n        try {\r\n            if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {\r\n                return;\r\n            }\r\n            cameraSource.start(surfaceHolder);\r\n            detector.setProcessor(new LargestFaceFocusingProcessor(detector,\r\n                    new GraphicFaceTracker(this)));\r\n        } catch (IOException e) {\r\n            e.printStackTrace();\r\n        }\r\n    }\r\n\r\n    @Override\r\n    public void surfaceChanged(SurfaceHolder surfaceHolder, int i, int i1, int i2) {\r\n    }\r\n\r\n    @Override\r\n    public void surfaceDestroyed(SurfaceHolder surfaceHolder) {\r\n        cameraSource.stop();\r\n    }\r\n\r\n    @Override\r\n    protected void onDestroy() {\r\n        super.onDestroy();\r\n        \/\/ Release the camera and detector so their resources are freed.\r\n        if (cameraSource != null) {\r\n            cameraSource.release();\r\n        }\r\n        if (detector != null) {\r\n            detector.release();\r\n        }\r\n    }\r\n\r\n    @Override\r\n    public void onPictureTaken(byte[] bytes) {\r\n        bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);\r\n        \/\/ Save or display the image as per your requirements. Here we display it.\r\n        Intent intent = new Intent(this, PictureActivity.class);\r\n        startActivity(intent);\r\n    }\r\n}<\/pre>\n<p>Here we create a custom Tracker, i.e., <strong>GraphicFaceTracker<\/strong>, which tracks facial features. 
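<\/p>\n<p>The blink itself is detected with a small three-state machine over the eye-open probability: wait for the eyes to be open, then closed, then open again. The tracker below implements exactly this; as a framework-free illustration (the class and method names here are our own, not part of the Vision API), the same logic can be sketched in plain Java:<\/p>\n<pre class=\"lang:java decode:true\">public class BlinkStateMachine {\r\n    private static final float OPEN_THRESHOLD = 0.85f;\r\n    private static final float CLOSE_THRESHOLD = 0.4f;\r\n    private int state = 0;\r\n\r\n    \/\/ Feed the minimum of the two eye-open probabilities;\r\n    \/\/ returns true when a full open-closed-open cycle completes.\r\n    public boolean update(float eyeOpenProbability) {\r\n        switch (state) {\r\n            case 0: \/\/ waiting for eyes open\r\n                if (eyeOpenProbability &gt; OPEN_THRESHOLD) state = 1;\r\n                return false;\r\n            case 1: \/\/ eyes were open, waiting for a close\r\n                if (eyeOpenProbability &lt; CLOSE_THRESHOLD) state = 2;\r\n                return false;\r\n            default: \/\/ eyes were closed, waiting for a re-open\r\n                if (eyeOpenProbability &gt; OPEN_THRESHOLD) {\r\n                    state = 0;\r\n                    return true;\r\n                }\r\n                return false;\r\n        }\r\n    }\r\n}<\/pre>\n<p>Because the open and close thresholds differ, they form a hysteresis band: noisy probabilities hovering around a single value cannot trigger a spurious blink.<\/p>\n<p>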
We use this to track the blinking of the eyes.<\/p>\n<p>Below is the code for\u00a0<strong>GraphicFaceTracker.java.<\/strong><\/p>\n<pre class=\"\">package com.test.camerademo.ui;\r\n\r\nimport android.util.Log;\r\n\r\nimport com.google.android.gms.vision.Tracker;\r\nimport com.google.android.gms.vision.face.Face;\r\nimport com.google.android.gms.vision.face.FaceDetector;\r\n\r\npublic class GraphicFaceTracker extends Tracker&lt;Face&gt; {\r\n\r\n    private static final float OPEN_THRESHOLD = 0.85f;\r\n    private static final float CLOSE_THRESHOLD = 0.4f;\r\n    private final MainActivity mainActivity;\r\n    private int state = 0;\r\n\r\n    GraphicFaceTracker(MainActivity mainActivity) {\r\n        this.mainActivity = mainActivity;\r\n    }\r\n\r\n    private void blink(float value) {\r\n        switch (state) {\r\n            case 0:\r\n                if (value &gt; OPEN_THRESHOLD) {\r\n                    \/\/ Both eyes are initially open\r\n                    state = 1;\r\n                }\r\n                break;\r\n            case 1:\r\n                if (value &lt; CLOSE_THRESHOLD) {\r\n                    \/\/ Both eyes become closed\r\n                    state = 2;\r\n                }\r\n                break;\r\n            case 2:\r\n                if (value &gt; OPEN_THRESHOLD) {\r\n                    \/\/ Both eyes are open again\r\n                    Log.i(\"Camera Demo\", \"blink has occurred!\");\r\n                    state = 0;\r\n                    mainActivity.captureImage();\r\n                }\r\n                break;\r\n            default:\r\n                break;\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Update the position\/characteristics of the face within the overlay.\r\n     *\/\r\n    @Override\r\n    public void onUpdate(FaceDetector.Detections&lt;Face&gt; detectionResults, Face face) {\r\n        float left = face.getIsLeftEyeOpenProbability();\r\n        float right = face.getIsRightEyeOpenProbability();\r\n 
       if ((left == Face.UNCOMPUTED_PROBABILITY) ||\r\n                (right == Face.UNCOMPUTED_PROBABILITY)) {\r\n            \/\/ At least one of the eyes was not detected.\r\n            return;\r\n        }\r\n\r\n        float value = Math.min(left, right);\r\n        blink(value);\r\n    }\r\n}<\/pre>\n<p>Once the image has been captured, we display it in <strong>PictureActivity<\/strong>.<\/p>\n<p>Below is the code for <strong>activity_picture.xml<\/strong>.<\/p>\n<pre class=\"\">&lt;?xml version=\"1.0\" encoding=\"utf-8\"?&gt;\r\n&lt;RelativeLayout xmlns:android=\"http:\/\/schemas.android.com\/apk\/res\/android\"\r\n    xmlns:tools=\"http:\/\/schemas.android.com\/tools\"\r\n    android:layout_width=\"match_parent\"\r\n    android:layout_height=\"match_parent\"\r\n    tools:context=\".ui.PictureActivity\"&gt;\r\n\r\n    &lt;ImageView\r\n        android:id=\"@+id\/img\"\r\n        android:layout_width=\"match_parent\"\r\n        android:layout_height=\"match_parent\"\r\n        android:scaleType=\"fitXY\"\r\n        android:src=\"@mipmap\/ic_launcher\" \/&gt;\r\n\r\n&lt;\/RelativeLayout&gt;<\/pre>\n<p>And the code for <strong>PictureActivity.java<\/strong> is below.<\/p>\n<pre class=\"\">package com.test.camerademo.ui;\r\n\r\nimport android.os.Bundle;\r\nimport android.support.v7.app.AppCompatActivity;\r\nimport android.widget.ImageView;\r\n\r\nimport com.test.camerademo.R;\r\n\r\npublic class PictureActivity extends AppCompatActivity {\r\n    private ImageView imageView;\r\n\r\n    @Override\r\n    protected void onCreate(Bundle savedInstanceState) {\r\n        super.onCreate(savedInstanceState);\r\n        setContentView(R.layout.activity_picture);\r\n\r\n        imageView = findViewById(R.id.img);\r\n        imageView.setImageBitmap(MainActivity.bitmap);\r\n    }\r\n}<\/pre>\n<p>Now, when we run this code and blink our eyes, an image is captured and 
displayed.<\/p>\n<p>InnovationM is a globally renowned\u00a0<a href=\"https:\/\/www.innovationm.com\/services\/app-development\/\">Mobile app development company<\/a>\u00a0that delivers strong and secure Android, iOS, and hybrid app development services. Our commitment and engagement have earned us a bright standing in the world of technology and a consecutive string of success stories, making us a leading\u00a0iOS app development company.<\/p>\n<p>That&#8217;s all for this post. Hope you enjoyed learning. \ud83d\ude42<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Mobile Vision API provides a framework for recognizing objects in photos and videos. The framework includes\u00a0detectors, which locate and describe visual objects in images or video frames, and an event-driven API that tracks the position of those objects in video. The objects that can be tracked by Mobile Vision API include facial features, text [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":5801,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2,71],"tags":[],"class_list":["post-5564","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-android","category-mobile"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Capture Image on Eye Blink - InnovationM - Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Capture Image on Eye Blink - InnovationM - Blog\" \/>\n<meta 
property=\"og:description\" content=\"The Mobile Vision API provides a framework for recognizing objects in photos and videos. The framework includes\u00a0detectors, which locate and describe visual objects in images or video frames, and an event-driven API that tracks the position of those objects in video. The objects that can be tracked by Mobile Vision API include facial features, text [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/\" \/>\n<meta property=\"og:site_name\" content=\"InnovationM - Blog\" \/>\n<meta property=\"article:published_time\" content=\"2020-01-24T09:13:35+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-14T06:17:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.innovationm.com\/blog\/wp-content\/uploads\/2020\/01\/Capture-Image-on-Eye-Blink.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1151\" \/>\n\t<meta property=\"og:image:height\" content=\"640\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"InnovationM Admin\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"InnovationM Admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/\"},\"author\":{\"name\":\"InnovationM Admin\",\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/#\\\/schema\\\/person\\\/a831bf4602d69d1fa452e3de0c8862ed\"},\"headline\":\"Capture Image on Eye Blink\",\"datePublished\":\"2020-01-24T09:13:35+00:00\",\"dateModified\":\"2026-01-14T06:17:38+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/\"},\"wordCount\":324,\"commentCount\":0,\"image\":{\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/wp-content\\\/uploads\\\/2020\\\/01\\\/Capture-Image-on-Eye-Blink.png\",\"articleSection\":[\"Android\",\"Mobile\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/\",\"url\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/\",\"name\":\"Capture Image on Eye Blink - InnovationM - 
Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/wp-content\\\/uploads\\\/2020\\\/01\\\/Capture-Image-on-Eye-Blink.png\",\"datePublished\":\"2020-01-24T09:13:35+00:00\",\"dateModified\":\"2026-01-14T06:17:38+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/#\\\/schema\\\/person\\\/a831bf4602d69d1fa452e3de0c8862ed\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/wp-content\\\/uploads\\\/2020\\\/01\\\/Capture-Image-on-Eye-Blink.png\",\"contentUrl\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/wp-content\\\/uploads\\\/2020\\\/01\\\/Capture-Image-on-Eye-Blink.png\",\"width\":1151,\"height\":640,\"caption\":\"Capture Image on Eye Blink\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/capture-image-on-eye-blink\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Capture Image on Eye Blink\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/\",\"name\":\"InnovationM - 
Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/#\\\/schema\\\/person\\\/a831bf4602d69d1fa452e3de0c8862ed\",\"name\":\"InnovationM Admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5c99d9eece9dfbc82297cf34ddd58e9fe05bb52fe66c8f6bf6c0a45bfb6d7629?s=96&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5c99d9eece9dfbc82297cf34ddd58e9fe05bb52fe66c8f6bf6c0a45bfb6d7629?s=96&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5c99d9eece9dfbc82297cf34ddd58e9fe05bb52fe66c8f6bf6c0a45bfb6d7629?s=96&r=g\",\"caption\":\"InnovationM Admin\"},\"sameAs\":[\"http:\\\/\\\/www.innovationm.com\\\/\"],\"url\":\"https:\\\/\\\/www.innovationm.com\\\/blog\\\/author\\\/innovationmadmin\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Capture Image on Eye Blink - InnovationM - Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/","og_locale":"en_US","og_type":"article","og_title":"Capture Image on Eye Blink - InnovationM - Blog","og_description":"The Mobile Vision API provides a framework for recognizing objects in photos and videos. The framework includes\u00a0detectors, which locate and describe visual objects in images or video frames, and an event-driven API that tracks the position of those objects in video. 
The objects that can be tracked by Mobile Vision API include facial features, text [&hellip;]","og_url":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/","og_site_name":"InnovationM - Blog","article_published_time":"2020-01-24T09:13:35+00:00","article_modified_time":"2026-01-14T06:17:38+00:00","og_image":[{"width":1151,"height":640,"url":"https:\/\/www.innovationm.com\/blog\/wp-content\/uploads\/2020\/01\/Capture-Image-on-Eye-Blink.png","type":"image\/png"}],"author":"InnovationM Admin","twitter_misc":{"Written by":"InnovationM Admin","Est. reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/#article","isPartOf":{"@id":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/"},"author":{"name":"InnovationM Admin","@id":"https:\/\/www.innovationm.com\/blog\/#\/schema\/person\/a831bf4602d69d1fa452e3de0c8862ed"},"headline":"Capture Image on Eye Blink","datePublished":"2020-01-24T09:13:35+00:00","dateModified":"2026-01-14T06:17:38+00:00","mainEntityOfPage":{"@id":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/"},"wordCount":324,"commentCount":0,"image":{"@id":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/#primaryimage"},"thumbnailUrl":"https:\/\/www.innovationm.com\/blog\/wp-content\/uploads\/2020\/01\/Capture-Image-on-Eye-Blink.png","articleSection":["Android","Mobile"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/","url":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/","name":"Capture Image on Eye Blink - InnovationM - 
Blog","isPartOf":{"@id":"https:\/\/www.innovationm.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/#primaryimage"},"image":{"@id":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/#primaryimage"},"thumbnailUrl":"https:\/\/www.innovationm.com\/blog\/wp-content\/uploads\/2020\/01\/Capture-Image-on-Eye-Blink.png","datePublished":"2020-01-24T09:13:35+00:00","dateModified":"2026-01-14T06:17:38+00:00","author":{"@id":"https:\/\/www.innovationm.com\/blog\/#\/schema\/person\/a831bf4602d69d1fa452e3de0c8862ed"},"breadcrumb":{"@id":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/#primaryimage","url":"https:\/\/www.innovationm.com\/blog\/wp-content\/uploads\/2020\/01\/Capture-Image-on-Eye-Blink.png","contentUrl":"https:\/\/www.innovationm.com\/blog\/wp-content\/uploads\/2020\/01\/Capture-Image-on-Eye-Blink.png","width":1151,"height":640,"caption":"Capture Image on Eye Blink"},{"@type":"BreadcrumbList","@id":"https:\/\/www.innovationm.com\/blog\/capture-image-on-eye-blink\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.innovationm.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Capture Image on Eye Blink"}]},{"@type":"WebSite","@id":"https:\/\/www.innovationm.com\/blog\/#website","url":"https:\/\/www.innovationm.com\/blog\/","name":"InnovationM - 
Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.innovationm.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.innovationm.com\/blog\/#\/schema\/person\/a831bf4602d69d1fa452e3de0c8862ed","name":"InnovationM Admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5c99d9eece9dfbc82297cf34ddd58e9fe05bb52fe66c8f6bf6c0a45bfb6d7629?s=96&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5c99d9eece9dfbc82297cf34ddd58e9fe05bb52fe66c8f6bf6c0a45bfb6d7629?s=96&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5c99d9eece9dfbc82297cf34ddd58e9fe05bb52fe66c8f6bf6c0a45bfb6d7629?s=96&r=g","caption":"InnovationM Admin"},"sameAs":["http:\/\/www.innovationm.com\/"],"url":"https:\/\/www.innovationm.com\/blog\/author\/innovationmadmin\/"}]}},"_links":{"self":[{"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/posts\/5564","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/comments?post=5564"}],"version-history":[{"count":1,"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/posts\/5564\/revisions"}],"predecessor-version":[{"id":8920,"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/posts\/5564\/revisions\/8920"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/media\/5801"}],"wp:attachment":[{"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/media?parent=5564"}
],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/categories?post=5564"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.innovationm.com\/blog\/wp-json\/wp\/v2\/tags?post=5564"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}