Anjum48
Anjum48 t1_j4wjh0m wrote
Reply to [D] Do you know of any model capable of detecting generative model(GPT) generated text ? by CaptainDifferent3116
I came across this one last week, which the author says is a fine-tuned BERT model: https://originality.ai/
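If you'd rather run something locally instead of a paid service, here's a minimal sketch using the Hugging Face pipeline API with the publicly available roberta-base-openai-detector checkpoint. Note that detector was trained on GPT-2 output, so treat its scores on text from newer models with a pinch of salt:

    # Rough sketch: classify a passage with an off-the-shelf detector checkpoint.
    # Trained on GPT-2 output, so it's only a weak signal for newer models' text.
    from transformers import pipeline

    detector = pipeline("text-classification", model="roberta-base-openai-detector")

    text = "The quick brown fox jumps over the lazy dog."
    result = detector(text, truncation=True)[0]
    print(result["label"], result["score"])  # label is e.g. "Real" or "Fake" plus a confidence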
Anjum48 t1_j4vc9kp wrote
Reply to comment by kingdroopa in [D] Suggestion for approaching img-to-img? by kingdroopa
The UNet I described will output a continuous value between 0 and 1 for each pixel, which you can use as a proxy for your IR image.
People often apply a threshold to this output (e.g. 0.5) to create a binary mask, which might be where you're getting confused.
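As a rough sketch of both uses (assuming a Keras-style model, with names like rgb_batch standing in for your own data):

    import numpy as np

    # pred: model output in [0, 1], shape (H, W, 1) - use it directly as a proxy IR image,
    # or threshold it if you actually want a binary mask.
    pred = model.predict(rgb_batch)[0]          # first image in the batch
    ir_proxy = (pred * 255).astype(np.uint8)    # continuous values rescaled for saving/viewing
    mask = (pred > 0.5).astype(np.uint8)        # optional binary mask via a 0.5 threshold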
Anjum48 t1_j4v8mpm wrote
Reply to comment by kingdroopa in [D] Suggestion for approaching img-to-img? by kingdroopa
+1 for UNets. Since IR will be a single channel, you could use a single-class semantic-segmentation-style model (i.e. a UNet with a 1-channel output passed through a sigmoid). Something like this would get you started:
model = sm.Unet('resnet34', classes=1, activation='sigmoid')
Edit: Forgot the link for the package I'm referencing: https://github.com/qubvel/segmentation_models
Many of the most popular encoders/backbones are implemented in that package
Edit 2: Is the FOV important? If you could resize/crop the images so that the RGB & IR FOVs are equivalent, that would make things a lot simpler
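For a slightly fuller (but still rough) sketch of how the pieces fit together with that package - array names like rgb_images/ir_images are placeholders for your own data, and the L1 loss is just one reasonable choice if you treat this as per-pixel regression onto the IR channel:

    # Rough sketch only. Assumes RGB inputs shaped (N, H, W, 3), IR targets shaped
    # (N, H, W, 1) scaled to [0, 1], and both already resized/cropped so the FOVs line up.
    import segmentation_models as sm

    sm.set_framework("tf.keras")

    BACKBONE = "resnet34"
    preprocess_input = sm.get_preprocessing(BACKBONE)

    model = sm.Unet(BACKBONE, classes=1, activation="sigmoid")

    # Per-pixel regression onto the IR image, so a simple L1 loss;
    # swap in whatever loss matches how you intend to use the output.
    model.compile(optimizer="adam", loss="mae", metrics=["mse"])

    x_train = preprocess_input(rgb_images)
    model.fit(x_train, ir_images, batch_size=8, epochs=20, validation_split=0.1)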
Anjum48 t1_j40gm5q wrote
Reply to comment by CuriousCesarr in [P] Looking for someone with good NN/ deep learning experience for a paid project by CuriousCesarr
Ah ok. On the first point, I guess whoever you are looking for will need to spend a considerable amount of time building or finding a dataset to train a model on.
On the second point, I might have incorrectly assumed you were familiar with the Zillow controversy around price prediction.
The TL;DR is that the ML team forecast prices using a tool made by Facebook called Prophet. The model was probably accurate enough for displaying a rough prediction on a website, but another team at Zillow started using these price predictions to flip houses and lost a whole bunch of money, since the model was never designed for that.
A lot of armchair data scientists quickly pointed the finger at Prophet for being a "bad" model. The reality is that all models are bad if they are used for the wrong purpose. In this case, the team flipping houses likely didn't listen to the data science team when they said the model shouldn't be used that way.
This is why it's a good idea to know how the model outputs are going to be used. The obvious answer is always "as accurate as possible" but sometimes that might not be accurate enough...
Hope this helps!
Anjum48 t1_j40cts5 wrote
Reply to [P] Looking for someone with good NN/ deep learning experience for a paid project by CuriousCesarr
1) Do you have a dataset? 2) How accurate does each of these outputs need to be for the task it's going to be used for? (See Zillow)
Anjum48 t1_j3lgduo wrote
Are you using the "en_core_web_trf" model in spaCy, which is based on the roberta-base transformer model?
If that model is still not accurate enough, you may need to look into the Hugging Face transformers library and try some more recent transformer models, e.g. DeBERTa.
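Assuming it's NER you're doing, here's a minimal sketch of both routes. The Hugging Face checkpoint below is just a well-known example NER fine-tune; swap in a DeBERTa-based one if you find one that covers your entity types:

    # Rough sketch comparing the two routes on the same sentence.
    import spacy
    from transformers import pipeline

    # Route 1: spaCy's transformer pipeline (roberta-base under the hood)
    nlp = spacy.load("en_core_web_trf")
    doc = nlp("Apple is opening a new office in Cambridge next year.")
    print([(ent.text, ent.label_) for ent in doc.ents])

    # Route 2: Hugging Face token-classification pipeline with an example checkpoint
    ner = pipeline("token-classification", model="dslim/bert-base-NER",
                   aggregation_strategy="simple")
    print(ner("Apple is opening a new office in Cambridge next year."))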
Anjum48 t1_j38bc0o wrote
Reply to comment by hotspicynoodles in [P] Defect detection system for welding by hotspicynoodles
Does the data include the type of material being welded (and possibly its thickness)? I think certain metals, e.g. titanium, stainless, etc., may need different torch cups and therefore different flow rates.
Anjum48 t1_j4zazrm wrote
Reply to comment by CaptainDifferent3116 in [D] Do you know of any model capable of detecting generative model(GPT) generated text ? by CaptainDifferent3116
Oops - didn't realise that. Apologies