Calling nsfwjs from .net through a WebAPI for video content screening

2022-12-05 12:15 · towerbit · C#

This article shows how to call nsfwjs from .net through a WebAPI to screen video content. At the end it also covers how to extract video key frames with FFMPEG and save them as jpg images.

1. Install nsfwjs with npm

npm install express --save
npm install multer --save
npm install jpeg-js --save
npm install @tensorflow/tfjs-node --save
npm install nsfwjs --save

Note: installing @tensorflow/tfjs-node requires Python; it is recommended to add Python to the user's Path environment variable.

2. Run the WebAPI service

The author of nsfwjs provides a simple server.js that exposes the WebAPI service; it is copied here for convenience:

const express = require('express')
const multer = require('multer')
const jpeg = require('jpeg-js')
 
const tf = require('@tensorflow/tfjs-node')
const nsfw = require('nsfwjs')
 
const app = express()
const upload = multer()
 
let _model
 
const convert = async (img) => {
  // Decoded image in UInt8 Byte array
  const image = await jpeg.decode(img, true)
 
  const numChannels = 3
  const numPixels = image.width * image.height
  const values = new Int32Array(numPixels * numChannels)
 
  for (let i = 0; i < numPixels; i++)
    for (let c = 0; c < numChannels; ++c)
      values[i * numChannels + c] = image.data[i * 4 + c]
 
  return tf.tensor3d(values, [image.height, image.width, numChannels], 'int32')
}
 
app.post('/nsfw', upload.single('image'), async (req, res) => {
  if (!req.file) res.status(400).send('Missing image multipart/form-data')
  else {
    const image = await convert(req.file.buffer)
    const predictions = await _model.classify(image)
    image.dispose()
    res.json(predictions)
  }
})
 
const load_model = async () => {
  _model = await nsfw.load() // you can specify a model here
}
 
// Keep the model in memory, make sure it's loaded only once
load_model().then(() => app.listen(8080))

Try running the service (note: this app only accepts images in jpeg format):

node server.js

Test it with curl (note: --data-binary does not produce a multipart body, so use --form for file uploads):

curl --request POST localhost:8080/nsfw --form 'image=@myimg.jpg'

or with the short option:

curl -F "image=@myimg.jpg" "http://localhost:8080/nsfw"

On Windows you can test it with Postman.

3. Wrapping the call in .net

Once the nsfwjs WebAPI service is running, wrapping it from .net is straightforward:

3.1 Start node server.js via a Process; the console window can be hidden by redirecting its output.

3.2 Submit the images to be screened through a client component such as HttpClient or RestSharp and read back the result.

3.3 To analyze video, also see the write-up on using FFMPEG to extract video key frames and save them as jpg images (covered at the end of this article).

Frame captures can be produced either by invoking the ffmpeg executable or programmatically with FFmpeg.AutoGen.

The results look good in practice: images under 200 KB usually get a verdict back within 200 ms.

ps: here is how to use FFMPEG to extract video key frames and save them as jpg images.

1. Command-line approach

One frame per second (r = rate):

ffmpeg -i input.mp4 -f image2 -r 1  dstPath/image-%03d.jpg

Extract I-frames:

ffmpeg -i input.mp4 -an -vf select='eq(pict_type\,I)' -vsync 2 -s 720x480 -f image2  dstPath/image-%03d.jpg

2. Code approach

Extract I-frames:

//source: keyframe.cpp
#include <iostream>
#include <cstdio>
#include <cstring>
 
#define __STDC_CONSTANT_MACROS
 
extern "C"
{
#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
#include <libavutil/timestamp.h>
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libavutil/common.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>
#include <libavutil/pixfmt.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <jpeglib.h>
}
 
using namespace std;
 
char errbuf[256];
char timebuf[256];
static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL;
static int width, height;
static enum AVPixelFormat pix_fmt;
static AVStream *video_stream = NULL;
static const char *src_filename = NULL;
static const char *output_dir = NULL;
static int video_stream_idx = -1;
static AVFrame *frame = NULL;
static AVFrame *pFrameRGB = NULL;
static AVPacket pkt;
static struct SwsContext *pSWSCtx = NULL;
static int video_frame_count = 0;
 
/* Enable or disable frame reference counting. You are not supposed to support
 * both paths in your application but pick the one most appropriate to your
 * needs. Look for the use of refcount in this example to see what are the
 * differences of API usage between them. */
static int refcount = 0;
static void jpg_save(uint8_t *pRGBBuffer, int iFrame, int width, int height);
 
static int decode_packet(int *got_frame, int cached)
{
    int ret = 0;
    int decoded = pkt.size;
    *got_frame = 0;
 
    if (pkt.stream_index == video_stream_idx)
    {
        /* decode video frame */
        ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
        if (ret < 0)
        {
            fprintf(stderr, "Error decoding video frame (%s)\n", av_make_error_string(errbuf, sizeof(errbuf), ret));
            return ret;
        }
        if (*got_frame)
        {
            if (frame->width != width || frame->height != height ||
                frame->format != pix_fmt)
            {
                /* To handle this change, one could call av_image_alloc again and
                 * decode the following frames into another rawvideo file. */
                fprintf(stderr, "Error: Width, height and pixel format have to be "
                                "constant in a rawvideo file, but the width, height or "
                                "pixel format of the input video changed:\n"
                                "old: width = %d, height = %d, format = %s\n"
                                "new: width = %d, height = %d, format = %s\n",
                        width, height, av_get_pix_fmt_name(pix_fmt),
                        frame->width, frame->height,
                        av_get_pix_fmt_name(frame->format));
                return -1;
            }
 
            video_frame_count++;
            static int iFrame = 0;
            if (frame->key_frame == 1) // if this is a key frame
            {
                sws_scale(pSWSCtx, frame->data, frame->linesize, 0,
                          video_dec_ctx->height,
                          pFrameRGB->data, pFrameRGB->linesize);
                // save to disk
                iFrame++;
                jpg_save(pFrameRGB->data[0], iFrame, width, height);
            }
        }
    }
    /* If we use frame reference counting, we own the data and need
     * to de-reference it when we don't use it anymore */
    if (*got_frame && refcount)
        av_frame_unref(frame);
    return decoded;
}
 
static int open_codec_context(int *stream_idx,
                              AVCodecContext **dec_ctx, AVFormatContext *fmt_ctx, enum AVMediaType type)
{
    int ret, stream_index;
    AVStream *st;
    AVCodec *dec = NULL;
    AVDictionary *opts = NULL;
    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    if (ret < 0)
    {
        fprintf(stderr, "Could not find %s stream in input file '%s'\n",
                av_get_media_type_string(type), src_filename);
        return ret;
    }
    else
    {
        stream_index = ret;
        st = fmt_ctx->streams[stream_index];
        /* find decoder for the stream */
        dec = avcodec_find_decoder(st->codecpar->codec_id);
        if (!dec)
        {
            fprintf(stderr, "Failed to find %s codec\n",
                    av_get_media_type_string(type));
            return AVERROR(EINVAL);
        }
        /* Allocate a codec context for the decoder */
        *dec_ctx = avcodec_alloc_context3(dec);
        if (!*dec_ctx)
        {
            fprintf(stderr, "Failed to allocate the %s codec context\n",
                    av_get_media_type_string(type));
            return AVERROR(ENOMEM);
        }
        /* Copy codec parameters from input stream to output codec context */
        if ((ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar)) < 0)
        {
            fprintf(stderr, "Failed to copy %s codec parameters to decoder context\n",
                    av_get_media_type_string(type));
            return ret;
        }
        /* Init the decoders, with or without reference counting */
        av_dict_set(&opts, "refcounted_frames", refcount ? "1" : "0", 0);
        if ((ret = avcodec_open2(*dec_ctx, dec, &opts)) < 0)
        {
            fprintf(stderr, "Failed to open %s codec\n",
                    av_get_media_type_string(type));
            return ret;
        }
        *stream_idx = stream_index;
    }
    return 0;
}
 
static int get_format_from_sample_fmt(const char **fmt, enum AVSampleFormat sample_fmt)
{
    int i;
    struct sample_fmt_entry
    {
        enum AVSampleFormat sample_fmt;
        const char *fmt_be, *fmt_le;
    } sample_fmt_entries[] = {
        {AV_SAMPLE_FMT_U8, "u8", "u8"},
        {AV_SAMPLE_FMT_S16, "s16be", "s16le"},
        {AV_SAMPLE_FMT_S32, "s32be", "s32le"},
        {AV_SAMPLE_FMT_FLT, "f32be", "f32le"},
        {AV_SAMPLE_FMT_DBL, "f64be", "f64le"},
    };
    *fmt = NULL;
    for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++)
    {
        struct sample_fmt_entry *entry = &sample_fmt_entries[i];
        if (sample_fmt == entry->sample_fmt)
        {
            *fmt = AV_NE(entry->fmt_be, entry->fmt_le);
            return 0;
        }
    }
    fprintf(stderr,
            "sample format %s is not supported as output format\n",
            av_get_sample_fmt_name(sample_fmt));
    return -1;
}
 
int main(int argc, char **argv)
{
    int ret = 0, got_frame;
    int numBytes = 0;
    uint8_t *buffer = NULL; /* initialized so the cleanup at end: is safe */
    if (argc != 3 && argc != 4)
    {
        fprintf(stderr, "usage: %s [-refcount] input_file output_dir\n"
                        "API example program to show how to read frames from an input file.\n"
                        "This program reads frames from a file, decodes them, and writes bmp keyframes\n"
                        "If the -refcount option is specified, the program use the\n"
                        "reference counting frame system which allows keeping a copy of\n"
                        "the data for longer than one decode call.\n"
                        "\n",
                argv[0]);
        exit(1);
    }
 
    if (argc == 4 && !strcmp(argv[1], "-refcount"))
    {
        refcount = 1;
        argv++;
    }
 
    src_filename = argv[1];
    output_dir = argv[2];
 
    /* open input file, and allocate format context */
    if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0)
    {
        fprintf(stderr, "Could not open source file %s\n", src_filename);
        exit(1);
    }
 
    /* retrieve stream information */
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0)
    {
        fprintf(stderr, "Could not find stream information\n");
        exit(1);
    }
 
    if (open_codec_context(&video_stream_idx, &video_dec_ctx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0)
    {
        video_stream = fmt_ctx->streams[video_stream_idx];
        /* allocate image where the decoded image will be put */
        width = video_dec_ctx->width;
        height = video_dec_ctx->height;
        pix_fmt = video_dec_ctx->pix_fmt;
    }
    else
    {
        goto end;
    }
 
    /* dump input information to stderr */
    av_dump_format(fmt_ctx, 0, src_filename, 0);
    if (!video_stream)
    {
        fprintf(stderr, "Could not find video stream in the input, aborting\n");
        ret = 1;
        goto end;
    }
 
    pFrameRGB = av_frame_alloc();
    /* use RGB24 consistently: sws_scale converts to RGB24 and jpg_save expects JCS_RGB */
    numBytes = avpicture_get_size(AV_PIX_FMT_RGB24, width, height);
    buffer = (uint8_t *)av_malloc(numBytes);
    avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24, width, height);
    pSWSCtx = sws_getContext(width, height, pix_fmt, width, height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
 
    frame = av_frame_alloc();
    if (!frame)
    {
        fprintf(stderr, "Could not allocate frame\n");
        ret = AVERROR(ENOMEM);
        goto end;
    }
 
    /* initialize packet, set data to NULL, let the demuxer fill it */
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;
 
    if (video_stream)
        printf("Demuxing video from file '%s' to dir: %s\n", src_filename, output_dir);
 
    /* read frames from the file */
    while (av_read_frame(fmt_ctx, &pkt) >= 0)
    {
        AVPacket orig_pkt = pkt;
        do
        {
            ret = decode_packet(&got_frame, 0);
            if (ret < 0)
                break;
            pkt.data += ret;
            pkt.size -= ret;
        } while (pkt.size > 0);
        av_packet_unref(&orig_pkt);
    }
 
    /* flush cached frames */
    pkt.data = NULL;
    pkt.size = 0;
 
end:
    if (video_dec_ctx)
        avcodec_free_context(&video_dec_ctx);
    if (fmt_ctx)
        avformat_close_input(&fmt_ctx);
    if (buffer)
        av_free(buffer);
    if (pFrameRGB)
        av_frame_free(&pFrameRGB);
    if (frame)
        av_frame_free(&frame);
    return ret < 0;
}
 
static void jpg_save(uint8_t *pRGBBuffer, int iFrame, int width, int height)
{
 
    struct jpeg_compress_struct cinfo;
 
    struct jpeg_error_mgr jerr;
 
    char szFilename[1024];
    int row_stride;
 
    FILE *fp;
    JSAMPROW row_pointer[1]; // one scanline of the bitmap
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
 
    sprintf(szFilename, "%s/image-%03d.jpg", output_dir, iFrame); // file name: output dir plus a sequence number
    fp = fopen(szFilename, "wb");
 
    if (fp == NULL)
        return;
 
    jpeg_stdio_dest(&cinfo, fp);
 
    cinfo.image_width = width; // image width and height, in pixels
    cinfo.image_height = height;
    cinfo.input_components = 3;     // 1 for grayscale, 3 for color bitmaps
    cinfo.in_color_space = JCS_RGB; // JCS_GRAYSCALE for grayscale, JCS_RGB for color
 
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 80, 1);
 
    jpeg_start_compress(&cinfo, TRUE);
 
    row_stride = cinfo.image_width * 3; // bytes per row; width * 3 for RGB (non-indexed) images
 
    // compress the image one scanline at a time
    while (cinfo.next_scanline < cinfo.image_height)
    {
        row_pointer[0] = &(pRGBBuffer[cinfo.next_scanline * row_stride]);
        jpeg_write_scanlines(&cinfo, row_pointer, 1);
    }
 
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
 
    fclose(fp);
}
And the accompanying Makefile:
keyframe:keyframe.cpp
    g++ $< -o $@ `pkg-config --libs libavcodec libavformat libswscale libavutil` -ljpeg -fpermissive

That concludes this article on calling nsfwjs from .net through a WebAPI for video screening. For more on .net, WebAPI and video screening, search 服务器之家's earlier articles, and thank you for your continued support of 服务器之家!

Original link: https://www.cnblogs.com/towerbit/p/15307207.html
