{"id":1391,"date":"2015-02-06T03:32:57","date_gmt":"2015-02-06T03:32:57","guid":{"rendered":"https:\/\/dev.railscarma.com\/components-hadoop\/"},"modified":"2024-01-10T10:00:21","modified_gmt":"2024-01-10T10:00:21","slug":"composants-hadoop","status":"publish","type":"post","link":"https:\/\/www.railscarma.com\/fr\/blog\/articles-techniques\/composants-hadoop\/","title":{"rendered":"Composants de Hadoop"},"content":{"rendered":"<div data-elementor-type=\"wp-post\" data-elementor-id=\"1391\" class=\"elementor elementor-1391\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-68105f4a elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"68105f4a\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-6c5d4a59\" data-id=\"6c5d4a59\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-6ef8345 elementor-widget elementor-widget-text-editor\" data-id=\"6ef8345\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\n<p><strong><a href=\"\/fr\/blog\/articles-techniques\/outil-traitement-big-data-hadoop\/\" target=\"_blank\" rel=\"noreferrer noopener\">L&#039;article pr\u00e9c\u00e9dent<\/a><\/strong> vous a donn\u00e9 un aper\u00e7u de Hadoop et des deux composants de Hadoop qui sont HDFS et le framework Mapreduce. 
Cet article va maintenant vous donner une br\u00e8ve explication sur l&#039;architecture HDFS et son fonctionnement.<\/p>\n\n<p><b>HDFS\u00a0:<\/b><\/p>\n\n<p>Le syst\u00e8me de fichiers distribu\u00e9s Hadoop (HDFS) est un stockage en cluster \u00e0 large bande passante auto-r\u00e9parateur. HDFS a une architecture ma\u00eetre\/esclave. Un cluster HDFS est constitu\u00e9 d&#039;un seul NameNode, un serveur ma\u00eetre qui g\u00e8re l&#039;espace de noms du syst\u00e8me de fichiers et r\u00e9gule l&#039;acc\u00e8s aux fichiers par les clients. De plus, il existe un certain nombre de n\u0153uds de donn\u00e9es, g\u00e9n\u00e9ralement un par n\u0153ud dans le cluster, qui g\u00e8rent le stockage attach\u00e9 aux n\u0153uds sur lesquels ils s&#039;ex\u00e9cutent.<\/p>\n\n<p>HDFS expose un espace de noms de syst\u00e8me de fichiers et permet de stocker les donn\u00e9es utilisateur dans des fichiers. En interne, un fichier est divis\u00e9 en un ou plusieurs blocs et ces blocs sont stock\u00e9s dans un ensemble de DataNodes. Le NameNode ex\u00e9cute des op\u00e9rations d&#039;espace de noms du syst\u00e8me de fichiers telles que l&#039;ouverture, la fermeture et le renommage de fichiers et de r\u00e9pertoires.<\/p>\n\n<p>Il d\u00e9termine \u00e9galement le mappage des blocs vers DataNOdes. Les DataNodes sont charg\u00e9s de r\u00e9pondre aux demandes de lecture et d&#039;\u00e9criture des clients du syst\u00e8me de fichiers. 
The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode.<\/p>\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><a href=\"https:\/\/www.railscarma.com\/wp-content\/uploads\/2024\/01\/graphics.gif\"><img decoding=\"async\" class=\"wp-image-1397\" src=\"https:\/\/www.railscarma.com\/wp-content\/uploads\/2024\/01\/graphics.gif\" \/><\/a><\/figure>\n<\/div>\n\n<p><strong>The sketch above represents the HDFS architecture.<\/strong><\/p>\n\n<p><b>MapReduce:<\/b><\/p>\n\n<p>The other concept and component of Hadoop is MapReduce. MapReduce is fault-tolerant, distributed resource management and scheduling coupled with a scalable data programming abstraction.<\/p>\n\n<p>It is a parallel data processing framework. The MapReduce framework is used to extract data from the various files and DataNodes available in a system. The first part is that the data has to be pushed onto the different servers where the files get replicated; in short, this is storing the data.<\/p>\n\n<p>In the second step, once the data is stored, the code is pushed onto the Hadoop cluster to the NameNode, from where it is distributed across the different DataNodes, which become the compute nodes; the end user then receives the final result.<\/p>\n\n<p>MapReduce in Hadoop is not just a single function in progress; several tasks are involved, such as the record reader, map, combiner, partitioner, shuffle, sort, and reduce, before the result is finally produced. 
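The block-and-replica arithmetic implied by the HDFS description above can be made concrete with a small plain-Ruby sketch. This is illustrative only, not Hadoop API code; the 128 MB block size and replication factor of 3 are commonly cited HDFS defaults, assumed here, and the method names are ours:

```ruby
# Illustrative sketch of HDFS block accounting. HDFS does this
# server-side; the constants below are commonly cited defaults.
BLOCK_SIZE  = 128 * 1024 * 1024  # bytes per block
REPLICATION = 3                  # replicas per block

# Number of blocks a file is split into (the last block may be partial).
def block_count(file_size)
  (file_size.to_f / BLOCK_SIZE).ceil
end

# Total block replicas the DataNodes collectively store for the file.
def replica_count(file_size)
  block_count(file_size) * REPLICATION
end

one_gb = 1024 * 1024 * 1024
puts block_count(one_gb)    # prints 8  (8 x 128 MB blocks)
puts replica_count(one_gb)  # prints 24 (8 blocks x 3 replicas)
```

The NameNode only tracks this mapping of blocks to DataNodes; the DataNodes hold the replicas themselves.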
It splits the input data set into independent chunks that are processed by the map tasks in a completely parallel manner.<\/p>\n\n<p>The framework sorts the outputs of the maps, which are then pushed as input to the reduce tasks. Typically, both the input and the output of the job are stored in a file system. The framework also takes care of scheduling tasks, monitoring them, and re-executing the failed tasks.<\/p>\n\n<p><b>MapReduce key-value pairs:<\/b><\/p>\n\n<p>Mappers and reducers always use key-value pairs as input and output. A reducer reduces values per key only. A mapper or reducer may emit 0, 1, or more key-value pairs for every input. Mappers and reducers may emit arbitrary keys or values, not just subsets or transformations of those in the input.<\/p>\n\n<p><b>Example:<\/b><\/p>\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><i>def map(key, value, context)<\/i><\/p>\n<p><i>value.to_s.split.each do |word|<\/i><\/p>\n<p><i>word.gsub!(\/\\W\/, &quot;&quot;)<\/i><\/p>\n<p><i>word.downcase!<\/i><\/p>\n<p><i>unless word.empty?<\/i><\/p>\n<p><i>context.write(Hadoop::Io::Text.new(word), Hadoop::Io::IntWritable.new(1))<\/i><\/p>\n<p><i>end<\/i><\/p>\n<p><i>end<\/i><\/p>\n<p><i>end<\/i><\/p>\n<p><i>def reduce(key, values, context)<\/i><\/p>\n<p><i>sum = 0<\/i><\/p>\n<p><i>values.each { |value| sum += value.get }<\/i><\/p>\n<p><i>context.write(key, Hadoop::Io::IntWritable.new(sum))<\/i><\/p>\n<p><i>end<\/i><\/p>\n<\/blockquote>\n\n<p>The map method splits the text on whitespace, strips all non-word 
characters, and lowercases each word. It emits a one as the value for every word. The reduce method iterates over the values, sums up all the counts, and writes out the input key together with the sum.<\/p>\n\n<p><b>Input file:<\/b> <span style=\"color: #000000;\">Hello World Bye World<\/span><\/p>\n\n<p><span style=\"color: #000000;\"><b>Output file:<\/b><\/span><span style=\"color: #000000;\"> bye 1<\/span><\/p>\n\n<p><span style=\"color: #000000;\"> hello 1<\/span><\/p>\n\n<p><span style=\"color: #000000;\"> world 2<\/span><\/p>\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><a href=\"https:\/\/www.railscarma.com\/wp-content\/uploads\/2024\/01\/graphics1.gif\"><img decoding=\"async\" class=\"wp-image-1398\" src=\"https:\/\/www.railscarma.com\/wp-content\/uploads\/2024\/01\/graphics1.gif\" \/><\/a><\/figure>\n<\/div>\n\n<p>This concludes the briefing on the components of Hadoop, their architecture and operation, and the steps involved in the different processes running in the two Hadoop systems.<\/p>\n\n<p>There are also a few pros and cons of Hadoop, just like a coin with two faces, which will be discussed in the upcoming blogs. 
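The example above uses the Hadoop Ruby bindings and therefore only runs inside a Hadoop job. As a self-contained sketch of the same map, shuffle\/sort, and reduce flow, the word count can be simulated in plain Ruby with no Hadoop at all (the phase method names here are ours, for illustration):

```ruby
# Map: emit a (word, 1) pair for every cleaned, lowercased word.
def map_phase(text)
  text.split.map { |w| [w.gsub(/\W/, '').downcase, 1] }
      .reject { |word, _| word.empty? }
end

# Shuffle/sort: group the pairs by key, as the framework does
# between the map and reduce phases.
def shuffle(pairs)
  pairs.group_by { |word, _| word }.sort.to_h
end

# Reduce: sum the counts for each key.
def reduce_phase(grouped)
  grouped.map { |word, pairs| [word, pairs.sum { |_, n| n }] }.to_h
end

counts = reduce_phase(shuffle(map_phase('Hello World Bye World')))
# => {"bye"=>1, "hello"=>1, "world"=>2}
```

The output matches the article's word-count example: each word appears with the number of times it occurred in the input.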
Complete knowledge of any concept is possible only once you know its advantages and disadvantages.<\/p>\n\n<p>To acquire complete knowledge of Hadoop, keep following the upcoming blog posts.<\/p>\n\n<p><span style=\"font-size: large;\"><b>The two faces of Hadoop<\/b><\/span><\/p>\n\n<p><span style=\"color: #000000;\"><b>Advantages:<\/b><\/span><\/p>\n\n<ul class=\"wp-block-list\">\n<li><span style=\"color: #000000;\">Hadoop is a platform that provides both distributed storage and computational capabilities.<\/span><\/li>\n<li><span style=\"color: #000000;\">Hadoop is extremely scalable. In fact, Hadoop was first considered to fix a scalability issue that existed in Nutch: start at 1 TB\/3 nodes and grow to petabytes\/1,000 nodes.<\/span><\/li>\n<li><span style=\"color: #000000;\">One of the major components of Hadoop is HDFS (the storage component), which is optimized for high throughput.<\/span><\/li>\n<li><span style=\"color: #000000;\">HDFS uses large block sizes, which ultimately works better when manipulating large files (gigabytes, petabytes\u2026).<\/span><\/li>\n<li><span style=\"color: #000000;\">Scalability and availability are the distinguishing characteristics of HDFS, achieved through data replication and fault tolerance.<\/span><\/li>\n<li><span style=\"color: #000000;\">HDFS can replicate files a specified number of times (the default is 3 replicas), which tolerates software and hardware failures. 
Moreover, it can automatically re-replicate data blocks from failed nodes.<\/span><\/li>\n<li><span style=\"color: #000000;\">Hadoop uses the MapReduce framework, a batch-based distributed computing framework. It allows parallel work over a large amount of data.<\/span><\/li>\n<li><span style=\"color: #000000;\">MapReduce lets developers focus solely on addressing business needs, rather than getting involved in the complications of distributed systems.<\/span><\/li>\n<li><span style=\"color: #000000;\">To achieve parallel and faster execution of a job, MapReduce decomposes it into Map &amp; Reduce tasks and schedules them for remote execution on the slave or data nodes of the Hadoop cluster.<\/span><\/li>\n<li><span style=\"color: #000000;\">Hadoop has the ability to work with MR jobs created in other languages \u2013 this is called streaming.<\/span><\/li>\n<li><span style=\"color: #000000;\">It is well suited to big data analysis.<\/span><\/li>\n<li><span style=\"color: #000000;\">Amazon&#039;s S3 is the ultimate source of truth here, and HDFS is ephemeral. You don&#039;t have to worry about reliability, etc. \u2013 Amazon S3 takes care of that for you. This also means you don&#039;t need a high replication factor in HDFS.<\/span><\/li>\n<li><span style=\"color: #000000;\">You can take advantage of interesting archiving features like Glacier.\u00a0<\/span><\/li>\n<li><span style=\"color: #000000;\">You also pay for compute only when you need it. 
It is well known that most Hadoop installations struggle to reach even 40% utilization [3],[4]. If your utilization is low, creating clusters on demand can be a winning solution for you.\u00a0<\/span><\/li>\n<li><span style=\"color: #000000;\">Another key point is that your workloads may have spikes (for example at the end of the week or month) or may grow every month. You can launch larger clusters when you need them and stick with smaller ones otherwise. <\/span><\/li>\n<li><span style=\"color: #000000;\">You are not forced to provision for peak workload at all times. Likewise, you don&#039;t need to plan your hardware 2 to 3 years ahead, as is common practice with in-house clusters. You can pay as you go and grow as you please. This considerably reduces the risks involved in Big Data projects.<\/span><\/li>\n<li><span style=\"color: #000000;\">Your administration costs can be reduced considerably, lowering your TCO.\u00a0<\/span><\/li>\n<li><span style=\"color: #000000;\">No upfront equipment costs. You can start as many nodes as you want, for as long as you need them, and then shut them down. 
It is getting easier and easier to run Hadoop on them.<\/span><\/li>\n<li><span style=\"color: #000000;\">Economics \u2013 cost per TB at a fraction of traditional options.<\/span><\/li>\n<li><span style=\"color: #000000;\">Flexibility \u2013 store any data, run any analysis.<\/span><\/li>\n<\/ul>\n\n<p><span style=\"color: #000000;\"><b>Disadvantages:<\/b><\/span><\/p>\n\n<ul class=\"wp-block-list\">\n<li><span style=\"color: #000000;\">As you know, Hadoop uses HDFS and MapReduce, and both of their master processes are single points of failure, although active work is underway on High Availability versions.<\/span><\/li>\n<li><span style=\"color: #000000;\">Until the Hadoop 2.x release, HDFS and MapReduce use single-master models, which can result in single points of failure.<\/span><\/li>\n<li><span style=\"color: #000000;\">Security is also one of the major concerns: Hadoop does offer a security model, but it is disabled by default because of its high complexity.<\/span><\/li>\n<li><span style=\"color: #000000;\">Hadoop does not offer storage- or network-level encryption, which is a very big concern for government-sector application data.<\/span><\/li>\n<li><span style=\"color: #000000;\">HDFS is inefficient at handling small files and lacks transparent compression, 
as HDFS is not designed to work well with random reads over small files because of its optimization for sustained throughput.<\/span><\/li>\n<li><span style=\"color: #000000;\">MapReduce is a batch-based architecture, which means it does not lend itself to use cases that require real-time data access.<\/span><\/li>\n<li><span style=\"color: #000000;\">MapReduce is a shared-nothing architecture. Hence, tasks that require global synchronization or the sharing of mutable data are not a good fit, which can pose challenges for some algorithms.<\/span><\/li>\n<li><span style=\"color: #000000;\">S3 is not very fast, and vanilla Apache Hadoop&#039;s S3 performance is not great. At Qubole, we have worked on Hadoop&#039;s performance with the S3 file system.<\/span><\/li>\n<li><span style=\"color: #000000;\">Of course, S3 comes with its own storage cost.\u00a0<\/span><\/li>\n<li><span style=\"color: #000000;\">If you want to keep the machines (or data) around for a long time, it is not as economical a solution as a physical cluster.<\/span><\/li>\n<\/ul>\n\n<p><span style=\"color: #222222;\">Here ends the briefing on <a href=\"\/fr\/blog\/articles-techniques\/introduction-big-data\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Big Data<\/strong><\/a> and Hadoop, their different systems, and their advantages and disadvantages. 
Hopefully this gave you an insight into the concepts of Big Data and Hadoop.<\/span><\/p>\n\n<p><a href=\"\/fr\/contactez-nous\/\">Get in touch with us.<\/a><\/p>\n\n<p><strong>Manasa Heggere<\/strong><\/p>\n\n<p>Senior Ruby on Rails Developer<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>","protected":false},"excerpt":{"rendered":"<p>The previous article gave you an overview of Hadoop and its two components, namely HDFS and the MapReduce framework. This article now gives you a brief explanation of the HDFS architecture and how it works. HDFS: The Hadoop Distributed File System (HDFS) is self-healing, high-bandwidth clustered storage. 
HDFS has a ...<\/p>","protected":false},"author":1,"featured_media":32049,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[384],"tags":[621,622,623,624,626],"class_list":["post-1391","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technical-articles","tag-big-data","tag-data","tag-hadoop","tag-hadoop-software","tag-software-framework"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.8 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Components of Hadoop - RailsCarma Blog<\/title>\n<meta name=\"description\" content=\"The Hadoop Distributed File System(HDFS) is self-healing high-bandwidth clustered storage. HDFS has a master\/slave architecture.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.railscarma.com\/fr\/blog\/articles-techniques\/composants-hadoop\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Components of Hadoop - RailsCarma Blog\" \/>\n<meta property=\"og:description\" content=\"The Hadoop Distributed File System(HDFS) is self-healing high-bandwidth clustered storage. 
HDFS has a master\/slave architecture.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.railscarma.com\/fr\/blog\/articles-techniques\/composants-hadoop\/\" \/>\n<meta property=\"og:site_name\" content=\"RailsCarma - Ruby on Rails Development Company specializing in Offshore Development\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/RailsCarma\/\" \/>\n<meta property=\"article:published_time\" content=\"2015-02-06T03:32:57+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-01-10T10:00:21+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.railscarma.com\/wp-content\/uploads\/2015\/02\/big_data_component.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"800\" \/>\n\t<meta property=\"og:image:height\" content=\"300\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@railscarma\" \/>\n<meta name=\"twitter:site\" content=\"@railscarma\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.railscarma.com\/fr\/blog\/technical-articles\/composants-hadoop\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.railscarma.com\/fr\/blog\/technical-articles\/composants-hadoop\/\"},\"author\":{\"name\":\"admin\",\"@id\":\"https:\/\/www.railscarma.com\/#\/schema\/person\/5f2228a2dec7549056e709de6eb85d21\"},\"headline\":\"Components of 